How to handle uncertainties in modelling due to human reliability issues for nuclear disposals
Abstract. Modelling plays a crucial role in assessing the design for future technical or geological development of a repository for radioactive waste. Models and the application of these models to scenarios are used to weigh different safety-related designs or to assess the suitability of sites. Even though the models simulate and evaluate a geological system designed to be passively safe, the human factor plays a significant role in the overall assessment process and thus in finding a site with the best possible safety in accordance with the Site Selection Act. This influence arises not at the level of repository operation, as is traditionally assumed with regard to human factors, but rather in the design of the repository – particularly in the decision-making process and the definition of the system's fundamental design parameters. Thus, considerations of human reliability are also of utmost importance for the passively safe system of a repository, especially in the current phase of the search and evaluation process. Given that severe accidents in man-made technological systems depend heavily on the reliability of human behaviour, not only in operation but also in design and conceptualization, considering human reliability aspects is essential for a successful site selection (Straeter, 2019).
This article first provides an overview of the technical, organizational, cross-organizational, and individual aspects of human reliability that are crucial in the modelling phase of radioactive waste management. Human aspects include variations in the selection of models, the definition of input parameters, and the interpretation of results as individual or group efforts. Based on a review of relevant guidelines on the topic (VDI 4006), suggestions are presented for dealing with these human factors at different levels. The results of a study on the significance of these factors, carried out within the TRANSENS project in cooperation between the University of Kassel and TU Clausthal, are presented to demonstrate the importance of the human factor in modelling. Overall, based on these considerations, the AHRIC (Assessment of Human Reliability in Concept phases) method is proposed to assess the negative effects of trust issues in the site selection work processes and to derive mitigating measures (Fritsch, 2025). The method applies to all work processes of the key actors.
Title: How to handle uncertainties in modelling due to human reliability issues for nuclear disposals
Authors: O. Straeter, F. Fritsch
Review
General comments:
The paper addresses the various types of human bias encountered in the safety case for nuclear waste repositories and shows ways to identify and avoid them.
In Table 1, a few more regularly required modelling steps could be added: completeness of the data survey (when is enough enough); details and type of documentation (to support the traceability of decisions); implicit assumptions (often hidden in larger models, input files, databases, or code packages). It is also advised to place "result evaluation" above "result interpretation".
Concerning the parameterization of models, it is dangerous (but often observed) that modellers simply use the databases coupled to the code packages they have paid for, without checking the origin and quality of these parameter sets. Another point regarding parameterization is the often individually biased selection of which processes and associated uncertainties are discussed when it comes to the categories "unknown knowns" and "unknown unknowns", i.e. how to deal with missing understanding and missing parameters (ignorance vs. uncertainty).
Specific comments:
In Figure 2 (although taken from another activity), "competence" should also be fed from "evaluation": in many areas, benchmarking (between codes, and also against real field data or experimental results) is a well-respected means of generating trust. Examples are the large international consortia behind DECOVALEX or JOSA.
The work explains in great detail the biases linked to group structures and behaviour. However, it should also be mentioned that in many circumstances a "four-eye principle" can, on the contrary, be beneficial at certain modelling steps. This is closely connected to the role of "external peers" and review processes.
Line 159: URS should be spelled out and a link to the project given.
Lines 195ff: An example would be very beneficial for the reader to understand what the entries in Figure 5 (which, by the way, could easily be turned into a table) really mean; currently this is very generic, and the interpretation of the data (response scale) without associated statements is not clear. Figure 6 evidently already addresses the next step; it is not explained where the p(Success) values come from, nor why there are only four distinct numbers in addition to 1 and 0. In addition, the computation of the numbers in the right-most column is unclear.
Line 250: What is the meaning of “heurism” in that context?
Technical comments:
Lines 223-226 are strongly redundant with lines 210ff. and should be merged.
Line 244: “and” instead of “ans”
An acknowledgement of TRANSENS is missing.