How to handle uncertainties in modelling due to human reliability issues for nuclear disposals
Download
- Final revised paper (published on 05 Dec 2025)
- Preprint (discussion started on 03 Jul 2025)
Interactive discussion
Status: closed
Comment types: AC – author | RC – referee | CC – community | EC – editor | CEC – chief editor
- RC1: 'Comment on sand-2025-3', Anonymous Referee #1, 14 Aug 2025
- AC1: 'Reply on RC1', Oliver Straeter, 24 Oct 2025
- RC2: 'Comment on sand-2025-3', Anonymous Referee #2, 07 Oct 2025
- AC2: 'Reply on RC2', Oliver Straeter, 24 Oct 2025
Peer review completion
AR – Author's response | RR – Referee report | ED – Editor decision | EF – Editorial file upload
AR by Oliver Straeter on behalf of the Authors (24 Oct 2025)
Author's response
Author's tracked changes
Manuscript
ED: Publish subject to technical corrections (03 Nov 2025) by Ingo Kock
AR by Oliver Straeter on behalf of the Authors (05 Nov 2025)
Author's response
Manuscript
Title: How to handle uncertainties in modelling due to human reliability issues for nuclear disposals
Authors: O. Straeter, F. Fritsch
Review
General comments:
The paper addresses the various types of human bias encountered in the safety case for nuclear waste repositories and shows ways to identify and avoid them.
In Table 1, a few more modelling steps that are regularly required could be added: completeness of the data survey (when is enough enough); details and type of documentation (to support the traceability of decisions); implicit assumptions (often hidden in larger models, input files, databases, or code packages). It is also advisable to place “result evaluation” above “result interpretation”.
Concerning the parameterisation of models, it is dangerous (but often observed) that modellers simply use the databases coupled to the code packages they have paid for, without checking the origin and quality of these parameter sets. Another point regarding parameterisation is the often individually biased selection of processes and their uncertainties to be discussed when it comes to the categories “unknown knowns” and “unknown unknowns”, i.e. how to deal with missing understanding and missing parameters (ignorance vs. uncertainty).
Specific comments:
In Figure 2 (although taken from another activity), “competence” should also be fed from “evaluation”: in many areas, benchmarking (between codes, and also against real field data or experimental results) is a well-respected means of generating trust. Examples are the large international consortia behind DECOVALEX or JOSA.
The work explains in great detail the biases linked to group structures and behaviour. However, it should also be mentioned that in many circumstances a “four-eye principle” can, on the contrary, be beneficial to individual steps in the modelling process. This is closely connected to the role of “external peers” and review processes.
Line 159: URS should be spelled out and a link to the project given.
Lines 195ff: An example would be very beneficial for the reader to understand what the entries in Figure 5 (which, by the way, could easily be turned into a table) really mean; currently this is very generic, and the interpretation of the data (response scale) without associated statements is not clear. Figure 6 obviously focuses already on the next step; it is not explained where the p(Success) values come from, nor why there are only four distinct numbers in addition to 1 and 0. In addition, the computation of the numbers in the right-most column is unclear.
Line 250: What is the meaning of “heurism” in that context?
Technical comments:
Lines 223-226 are strongly redundant with lines 210ff. and should be merged.
Line 244: “and” instead of “ans”
An acknowledgement of TRANSENS is missing.