PSAM12 - Probabilistic Safety Assessment and Management
Wednesday, June 25, 2014



Sessions:
Plenary - W01 - W02 - W03 - W04 - W05 - W06 - W07 - W11 - W12 - W13 - W14 - W15 - W16 - W17 - W21 - W22 - W23 - W24 - W25 - W26 - W27


W00 Plenary Session:

9:00 AM

A New Look at Risk Assessment in Cancer: The Molecular Era

Sandeep Bobby Reddy MD

Clinical Associate Professor, Geffen/UCLA School of Medicine
Chief Medical Officer, CARIS Life Sciences

Bio: Dr. Reddy graduated from the UCLA School of Medicine, trained in Internal Medicine at Harbor-UCLA Medical Center, and completed a fellowship in Medical Oncology and Hematology at the City of Hope National Medical Center. He is currently in practice at Los Alamitos Hematology Oncology and holds an academic position as clinical instructor at Harbor-UCLA Medical Center. Dr. Reddy has authored numerous publications, given presentations at national and international meetings, and worked extensively as a consultant in the field of molecular diagnostics. He is currently Chief Medical Officer of CARIS Life Sciences. His presentation will focus on the evolving paradigm shift away from statistically modeled risk assessment tools toward individualized risk assessment, driven by rapid technology changes in the field.

W01 Aviation and Space II

10:30 Honolulu

Chair: Gary Duncan, ARES Aerospace & Technology Services

20

Reliability-Based Design Optimization of Space Tether Considering Hybrid Uncertainty

Liping He, Jian Xiao, Tao Zhao (a), Yi Chen (b), Shuchun Duan (a)

a) School of Mechanics, Electronic, and Industrial Engineering, University of Electronic Science and Technology of China, Chengdu, China, b) School of Engineering and Built Environment, Glasgow Caledonian University, Glasgow, UK

Space tethers are widely used in the space field, and their reliability has increasingly become a research hotspot. To address problems such as low model versatility, high computational complexity, and heavy computation workload, this paper takes the deployable motorized momentum exchange tether (DMMET) as its engineering background and investigates a methodology framework that can deal with hybrid uncertainty factors in design optimization. After introducing the structure and application characteristics of the DMMET, the paper first analyses the performance characteristics and uncertain factors of the deployable mechanism, which are used to establish a neural network surrogate model of the deployable mechanism's strength and to enable scalability of the deployable mechanism. The uncertainty factors mainly include load uncertainty and the uncertainties of the system parameters and the calculation model. We mainly discuss the uncertainty from system parameters, including the bar dimensions of the deployable mechanism (length, width, and thickness) and material performance parameters (elastic modulus, density, strength, etc.). Finally, we obtain more satisfactory optimization results through reliability-based design optimization of a specific deployable mechanism. Numerical examples verify that this method is feasible and has high solution accuracy, and it can serve as a reference for DMMET engineering design.

280

Using Subset Simulation to Quantify Stakeholder Contribution to Runway Overrun

Ludwig Drees, Chong Wang, and Florian Holzapfel

Institute of Flight System Dynamics, Technische Universität München, Garching, Germany

This paper studies the use of sensitivities to quantify the extent to which individual stakeholders contribute to the incident of runway overrun. For that purpose, we present a model of the runway overrun incident. The incident model is based on aircraft dynamics and describes the functional relationship between the contributing factors leading to the incident; it also takes operational dependencies into account. The model inputs are the probability distributions of the contributing factors, which are obtained by fitting distributions to data from a fictive airline. By propagating the probability distributions through our incident model, we are able to make statistically valid statements about the occurrence probability of the incident itself. To this end, we use the subset simulation method. By estimating the design point using the samples of the subset simulation, we obtain the sensitivities by applying the First-Order Reliability Method. The sensitivities are used to quantify stakeholder contribution to the runway overrun incident by allocating the various stakeholders to the contributing factors.
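
To make the estimation step concrete, here is a minimal, self-contained sketch of subset simulation for a rare-event probability. It is generic, not the authors' incident model: the limit state (a sum of four standard normal factors), the threshold, and the proposal width are all invented for illustration.

```python
import numpy as np

def subset_simulation(g, dim, n_samples=1000, p0=0.1, threshold=0.0, seed=0):
    """Estimate P(g(X) >= threshold) for X ~ N(0, I) via subset simulation."""
    rng = np.random.default_rng(seed)
    n_seeds = int(p0 * n_samples)              # seeds kept per level
    x = rng.standard_normal((n_samples, dim))
    y = np.apply_along_axis(g, 1, x)
    p = 1.0
    for _ in range(50):                        # cap on the number of levels
        order = np.argsort(y)[::-1]            # sort descending by g
        x, y = x[order], y[order]
        b = y[n_seeds - 1]                     # intermediate threshold
        if b >= threshold:                     # final level reached
            return p * np.mean(y >= threshold)
        p *= p0
        # Regrow the population from the seeds with a Metropolis random walk
        # targeting the conditional distribution given g(X) >= b.
        new_x, new_y = [], []
        for xs, ys in zip(x[:n_seeds], y[:n_seeds]):
            cur_x, cur_y = xs, ys
            for _ in range(n_samples // n_seeds):
                cand = cur_x + 0.8 * rng.standard_normal(dim)
                # Accept w.r.t. the standard normal density, then apply the
                # indicator of the current conditional level.
                if rng.random() < np.exp(0.5 * (cur_x @ cur_x - cand @ cand)):
                    g_cand = g(cand)
                    if g_cand >= b:
                        cur_x, cur_y = cand, g_cand
                new_x.append(cur_x)
                new_y.append(cur_y)
        x, y = np.array(new_x), np.array(new_y)
    return p * np.mean(y >= threshold)

# Toy "overrun margin": exceedance if four factor effects sum above 8.
prob = subset_simulation(lambda u: u.sum(), dim=4, threshold=8.0)
print(f"estimated exceedance probability: {prob:.2e}")   # analytic ~3.2e-5
```

The method expresses the rare probability as a product of larger conditional probabilities, one per intermediate level, which is why it needs far fewer model evaluations than plain Monte Carlo at the same accuracy.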

476

International Space Station End-of-Life Probabilistic Risk Assessment

Gary Duncan

ARES Technical Services, Houston, TX, USA

Although there are ongoing efforts to extend the International Space Station (ISS) life cycle through 2028, its end-of-life (EOL) is currently scheduled for 2020. The EOL of the ISS will require de-orbiting the station. This will be the largest manmade object ever to be de-orbited; safely de-orbiting it will therefore be a very complex problem. This process is being planned by NASA and its international partners. Numerous factors will need to be considered, such as target corridors, orbits, altitude, drag, maneuvering capabilities, and debris mapping. The ISS EOL Probabilistic Risk Assessment (PRA) will play a part in this process by estimating the reliability of the hardware supplying the maneuvering capabilities. The PRA will model the probability of failure of the systems supplying and controlling the thrust needed to aid in the de-orbit maneuvering.

W02 Reliability Analysis and Risk Assessment Methods V

10:30 Kahuku

Chair: Mohammad Pourgol-Mohammad, Sahand University of Technology

533

MCSS Based Numerical Simulation for Reliability Evaluation of Repairable System in NPP

Daochuan Ge (a,b), Ruoxing Zhang, Qiang Chou (b), Yanhua Yang (a)

a) School of Nuclear Science and Engineering, Shanghai Jiao Tong University, Shanghai, China, b) Software Development Center, State Nuclear Power Technology Corporation, Beijing, China

Quantitative analyses of Nuclear Power Plant (NPP) repairable systems are conventionally carried out with Markov-based methods. However, the system state space grows exponentially with the number of basic events, which makes the problem hard or even impossible to solve. In addition, maintenance/test activities are frequently imposed on some safety-critical components, which makes the Markov-based approach inapplicable. In this paper, a new numerical simulation approach based on the MCSS (Minimal Cut Sequence Set) is proposed, which overcomes the shortcomings of the conventional Markov method. Two typical cases are analyzed, and the results indicate that the new approach is both correct and feasible.
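
As a generic illustration of the simulation style (a sketch only, not the authors' MCSS algorithm, with invented failure and repair rates), the fragment below estimates the time-averaged unavailability of a two-train repairable system by event-driven Monte Carlo, a case a Markov model could also solve:

```python
import random

def simulate_unavailability(lmbda, mu, mission, trials=20000, seed=1):
    """Monte Carlo unavailability of a 1-out-of-2 repairable system.

    Each train fails at rate lmbda and is repaired at rate mu (exponential
    times); the system is unavailable only while both trains are down.
    """
    rng = random.Random(seed)
    downtime = 0.0
    for _ in range(trials):
        t = 0.0
        up = [True, True]                       # both trains initially up
        t_next = [rng.expovariate(lmbda), rng.expovariate(lmbda)]
        while t < mission:
            i = 0 if t_next[0] <= t_next[1] else 1
            t_new = min(t_next[i], mission)
            if not (up[0] or up[1]):            # both down: accumulate downtime
                downtime += t_new - t
            t = t_new
            if t >= mission:
                break
            up[i] = not up[i]                   # train i fails or is repaired
            rate = mu if not up[i] else lmbda
            t_next[i] = t + rng.expovariate(rate)
    return downtime / (trials * mission)

# failure rate 1e-3 /h, repair rate 0.1 /h, one-year mission
print(simulate_unavailability(1e-3, 0.1, 8760.0))   # ~ (lmbda/(lmbda+mu))**2
```

Unlike a Markov model, nothing here requires exponential distributions: the sampled times could be replaced by arbitrary repair or test-interval distributions without changing the structure of the simulation, which is the flexibility such simulation approaches trade against analytic solutions.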

410

Design for Reliability of Complex Systems with Limited Failure Data: Case Study of Horizontal Drilling Equipment

Morteza Soleimani (a), Mohammad Pourgol-Mohammad (b)

a) Tabriz University, Tabriz, Iran, b) Sahand University of Technology, Tabriz, Iran

In this paper, a methodology is developed for the reliability evaluation of electromechanical systems. The method is applicable in the early design phase, where only limited failure data are available. When experimental failure data are scarce, generic failure data are sought in related reliability data banks. In this method, Reliability Block Diagrams (RBD) are used to model the system reliability, and the Monte Carlo simulation technique is employed to simulate the system for reliability and availability calculations. The methodology includes reliability importance analysis and reliability allocation to optimize the reliability. Evaluating the reliability of complex systems in the reverse engineering (competitive) design phase is one application of this method. As a case study, horizontal drilling equipment is used to assess the proposed method. According to the results, the motor sub-system and the hydraulic sub-system are the critical elements from the reliability point of view. The results are compared with those of a reliability evaluation for a system with more failure and maintenance data available. This benchmark indicates the effectiveness and performance of the presented method for the reliability evaluation of systems.
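
The RBD-plus-Monte-Carlo step can be pictured with a short sketch. The structure, sub-system names, and failure rates below are hypothetical stand-ins, not data from the paper:

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical drilling-machine RBD: motor and hydraulics in series,
# with a redundant pair of control units (rates in failures per hour).
rates = {"motor": 4e-4, "hydraulic": 3e-4, "control_a": 1e-4, "control_b": 1e-4}

def system_survives(life, t):
    """RBD logic: motor AND hydraulic AND (control_a OR control_b)."""
    return (life["motor"] > t and life["hydraulic"] > t
            and (life["control_a"] > t or life["control_b"] > t))

def mc_reliability(t, n=100_000):
    ok = 0
    for _ in range(n):
        life = {k: rng.exponential(1.0 / lam) for k, lam in rates.items()}
        ok += system_survives(life, t)
    return ok / n

print("R(1000 h) ~", mc_reliability(1000.0))
# analytic check: exp(-0.4) * exp(-0.3) * (1 - (1 - exp(-0.1))**2) ~ 0.492
```

Generic failure rates from data banks enter exactly where `rates` is defined, and importance measures can be approximated from the same samples by re-evaluating the RBD logic with one component forced down.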

514

Analyzing Simulation-Based PRA Data Through Clustering: a BWR Station Blackout Case Study

Dan Maljovec, Shusen Liu, Bei Wang, Valerio Pascucci (a), Peer-Timo Bremer (b), Diego Mandelli, and Curtis Smith (c)

a) SCI Institute, University of Utah, Salt Lake City, USA, b) Lawrence Livermore National Laboratory, Livermore, USA, c) Idaho National Laboratory, Idaho Falls, USA

Dynamic probabilistic risk assessment (DPRA) methodologies couple system simulator codes (e.g., RELAP, MELCOR) with simulation controller codes (e.g., RAVEN, ADAPT). Whereas system simulator codes accurately model system dynamics deterministically, simulation controller codes introduce both deterministic (e.g., system control logic, operating procedures) and stochastic (e.g., component failures, parameter uncertainties) elements into the simulation. Typically, a DPRA is performed by 1) sampling values of a set of parameters from the uncertainty space of interest (using the simulation controller codes), and 2) simulating the system behavior for that specific set of parameter values (using the system simulator codes). For complex systems, one of the major challenges in using DPRA methodologies is analyzing the large amount of information (i.e., the large number of scenarios) generated, and clustering techniques are typically employed to allow users to better organize and interpret the data. In this paper, we focus on the analysis of a nuclear simulation dataset that is part of the risk-informed safety margin characterization (RISMC) boiling water reactor (BWR) station blackout (SBO) case study. We apply a software tool that provides domain experts with an interactive analysis and visualization environment for understanding the structures of such high-dimensional nuclear simulation datasets. Our tool encodes traditional and topology-based clustering techniques, where the latter partitions the data points into clusters based on their uniform gradient flow behavior. We demonstrate through our case study that the two types of clustering techniques complement each other in bringing enhanced structural understanding of the data.
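
As a rough, self-contained illustration of the traditional clustering step (k-means on scaled scenario features; the topology-based technique used in the paper is more involved), with entirely synthetic stand-in data:

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(3)

# Stand-in for a DPRA dataset: one row per simulated scenario, columns are
# sampled uncertain inputs plus a figure of merit; all values are synthetic.
n = 500
dg_recovery = rng.uniform(0, 24, n)          # diesel recovery time [h]
battery_life = rng.uniform(4, 8, n)          # DC battery life [h]
peak_temp = (600 + 40 * np.maximum(dg_recovery - battery_life, 0)
             + rng.normal(0, 25, n))         # crude synthetic response

X = np.column_stack([dg_recovery, battery_life, peak_temp])
X_std = (X - X.mean(axis=0)) / X.std(axis=0)    # scale before clustering

labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X_std)
for k in range(3):
    sel = labels == k
    print(f"cluster {k}: {sel.sum():3d} scenarios, "
          f"mean peak temp {peak_temp[sel].mean():7.1f} K")
```

Grouping thousands of simulated scenarios this way lets an analyst inspect a handful of representative behaviors (e.g., "battery depleted before diesel recovery") instead of individual runs.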

456

Quantification of MCS with BDD, Accuracy and Inclusion of Success in the Calculation – the RiskSpectrum MCS BDD Algorithm

Wei Wang, Ola Bäckström (a), and Pavel Krcal (a,b)

a) Lloyd's Register Consulting, Stockholm, Sweden, b) Uppsala University, Uppsala, Sweden

A quantification of a PSA can be performed through different techniques, of which the Minimal Cut Set (MCS) generation technique and Binary Decision Diagrams (BDD) are the best known. There is only one advantage of the MCS approach compared to the BDD approach: calculation time, or rather, the capability to always solve the problem. In most cases the MCS approach is fully sufficient. But as the number of high-probability events increases, e.g. due to seismic risk assessments, more accurate methods may be necessary. In some applications, a relevant numerical treatment of success in event trees may also be required to avoid overly conservative results. We discuss the quantification algorithm in RiskSpectrum MCS BDD, especially with regard to success in event trees. A BDD for the failure and success parts of a sequence can be generated separately, and the BDD structures can thereafter be combined. Under some conditions, this calculation will yield exactly the same result as if a complete BDD for both the failure and success parts had been generated. Properties of the algorithm are also demonstrated on several examples, including a large PSA.
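
The accuracy point can be seen in a toy calculation: a BDD in effect evaluates the top-event probability exactly by Shannon decomposition, whereas the MCS min-cut upper bound overestimates once basic events have high probabilities. The gate structure and probabilities below are invented:

```python
from functools import lru_cache

EVENTS = ("A", "B", "C")
PROB = {"A": 0.3, "B": 0.4, "C": 0.5}     # deliberately high probabilities

def top(values):
    """2-out-of-3 top event: (A and B) or (A and C) or (B and C)."""
    a, b, c = (values[e] for e in EVENTS)
    return (a and b) or (a and c) or (b and c)

@lru_cache(maxsize=None)
def shannon(i, assigned):
    """Exact P(top) by recursive conditioning on event i (what a BDD encodes)."""
    if i == len(EVENTS):
        return 1.0 if top(dict(assigned)) else 0.0
    e, p = EVENTS[i], PROB[EVENTS[i]]
    return (p * shannon(i + 1, assigned + ((e, True),))
            + (1 - p) * shannon(i + 1, assigned + ((e, False),)))

exact = shannon(0, ())
# min-cut upper bound over the MCS {AB, AC, BC}
mcs_bound = 1 - (1 - 0.3*0.4) * (1 - 0.3*0.5) * (1 - 0.4*0.5)
print(f"exact (BDD-style): {exact:.4f}   MCS upper bound: {mcs_bound:.4f}")
# exact = 0.3500, MCS bound = 0.4016: a ~15% overestimate at these probabilities
```

A success branch in an event tree multiplies in factors like (1 - P(failure)), which the exact decomposition handles naturally but which MCS quantification can only approximate, hence the combined failure/success BDD construction discussed in the paper.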

573

Developing a New Fire PRA Framework by Integrating Probabilistic Risk Assessment with a Fire Simulation Module

Tatsuya Sakurahara, Seyed A. Reihani, Zahra Mohaghegh, Mark Brandyberry (a), Ernie Kee (b), David Johnson (c), Shawn Rodgers (d), and Mary Anne Billings (d)

a) The University of Illinois at Urbana-Champaign, Urbana, IL, USA, b) YK.risk, LLC, Bay City, TX, USA, c) ABS Consulting Inc., Irvine, CA, USA, d) South Texas Project Nuclear Operating Company, Wadsworth, TX, USA

Recently, fire protection programs at nuclear power plants have transitioned to a risk-informed approach utilizing Fire Probabilistic Risk Assessment (Fire PRA). One of the main limitations of the current methodology is that it cannot adequately account for the dynamic behavior and effects of fire, owing to its reliance on the classical PRA methodology (i.e., Event Trees and Fault Trees). As a solution to this limitation, we propose in this paper an integrated framework for Fire PRA. The method falls midway between a classical and a fully dynamic PRA with respect to the utilization of simulation techniques. In the integrated framework, some of the fire-related Fault Trees are replaced with a Fire Simulation Module (FSM), which is linked to a plant-specific PRA model. The FSM is composed of simulation-based physical models for fire initiation, progression, and post-fire failure, and it includes uncertainty propagation in the physical models and input parameters. These features will reduce unnecessary conservatism in the current Fire PRA methodology by modeling the underlying physical phenomena and by considering the dynamic interactions among them.

W03 Human Reliability Analysis

10:30 O'ahu

Chair: Yung Hsien Chang, U.S. Nuclear Regulatory Commission

347

Lessons Learned from the US HRA Empirical Study

Huafei Liao (a), John Forester (a,b), Vinh N. Dang (c), Andreas Bye (d), Erasmia Lois, Y. James Chang (e)

a) Sandia National Laboratories, Albuquerque, NM, USA, b) Idaho National Laboratory, Idaho Falls, ID, USA, c) Paul Scherrer Institute, Villigen PSI, Switzerland, d) OECD Halden Reactor Project, Institute for Energy Technology, IFE, Halden, Norway, e) U.S. Nuclear Regulatory Commission, Washington, DC, USA

The US Human Reliability Analysis (HRA) Empirical Study (referred to as the US Study in this article) was conducted to confirm and expand on the insights developed from the International HRA Empirical Study (referred to as the International Study). Similar to the International Study, the US Study evaluated the performance of different HRA methods by comparing method predictions to actual crew performance in simulated accident scenarios conducted in a US nuclear power plant (NPP) simulator. In addition to identifying some new HRA and method-related issues, the design of the US Study allowed insights to be obtained on issues that were not addressed in the International Study. In particular, because multiple HRA teams applied each method in the US Study, comparing their analyses and predictions made it possible to separate analyst effects from method effects and to draw conclusions on aspects of methods that are susceptible to different application or usage by different analysts, which may lead to differences in results. The findings serve as a strong basis for improving the consistency and robustness of HRA, which in turn facilitates identification of mechanisms for improving operating crew performance in NPPs.

360

Extracting Human Reliability Information from Data Collected at Different Simulators: A Feasibility Test on Real Data

Salvatore Massaiu

OECD Halden Reactor Project, Halden, Norway

This paper presents a feasibility test on extracting HRA-relevant information from data collected at different plants/simulators. Newly proposed methodologies for HRA simulator data collection try to overcome the aggregation and generalization problems that stranded previous attempts at creating HRA data banks. Common to the different methodologies is their insistence on the need to precisely characterize the performance conditions. They differ in the type of information they aim to collect: some focus on failure probabilities, others on situational influences on the performance of the joint human-machine system. This paper investigates whether further information of use for HRA, such as timing and performance variability, could be added to and extracted from such databases. The data used in this test derive from three simulator experiments. Two experiments were conducted at the Halden Human-Machine Laboratory, and the third at a training simulator at a U.S. nuclear power plant. Altogether, 23 crews of licensed operators from four plants in two countries participated, and ten emergency scenarios were run. The test treats the data as a subset of a larger database, selected by an HRA user as relevant for the target application. The test shows that it is possible to extract three types of HRA-relevant data from records obtained at different simulators and plants: mean times of actions and diagnoses, response-time variability for critical actions, and standardized margins-to-failure information. This paper thus shows the feasibility of including and re-using traditional types of HRA data in newly proposed approaches to HRA database construction.

380

Simplified Human Reliability Analysis Process for Emergency Mitigation Equipment (EME) Deployment

Don E. MacLeod, Gareth W. Parry, Barry D. Sloane (a), Paul Lawrence (b), Eliseo M. Chan (c), and Alexander V. Trifanov (d)

a) ERIN Engineering and Research, Inc., Walnut Creek, USA, b) Ontario Power Generation, Inc., Pickering, Canada, c) Bruce Power, Toronto, Canada, d) Kinectrics, Inc., Pickering, Canada

For a variety of reasons, it is becoming more common for nuclear power plants to incorporate the use of portable equipment, for example, mobile diesel pumps or power generators, in their accident mitigation strategies. In order for the Probabilistic Risk Assessment (PRA) to reflect the as-built, as-operated plant, it is necessary to include these capabilities in the model. However, the current Human Reliability Analysis (HRA) methodologies commonly used in the nuclear power industry are not designed to accommodate the evaluation of some of the tasks associated with the use of portable equipment, such as retrieving equipment and making temporary power and pipe connections. This paper proposes a method for estimating the component of the human error probability (HEP) associated with the deployment of portable equipment. The other components of the HEP, such as the failure to identify the need to initiate portable equipment deployment, can be addressed with existing methodologies and are not addressed by this approach. This approach is intended for application to a variety of hazard risk assessments, including internal events, internal flooding, high winds, internal fires, external flooding, and seismic events.

126

Study on Operator Reliability of Digital Control System in Nuclear Power Plants Based on Boolean Network

Yanhua Zou, Li Zhang (a,b,c), Licao Dai, Pengcheng Li (c)

a) Institute of Human Factors Engineering and Safety Management, Hunan Institute of Technology, Hengyang, China, b) School of Nuclear Science and Technology, University of South China, Hengyang, China, c) Human Factor Institute, University of South China, Hengyang, China

The current human reliability analysis methods for analyzing operator reliability, carried out from the perspective of the operators themselves, are relatively static, because they do not take the effect of system evolution on operator performance into consideration. Focusing on operator reliability in the digital control system of a nuclear power plant, this paper uses Boolean network theory to explore operator behavior in the dynamic logic process of system evolution, aiming to uncover the dynamic evolution process of human-system interaction. A technique called the semi-tensor product of matrices can convert logical systems into standard discrete-time dynamic systems, from which the discrete-time linear equation and the reliability analysis model are established. Data collected from simulation experiments carried out in a full-scale simulator at the LingDong Nuclear Power Plant are found to be consistent with the operator reliability model constructed earlier.
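
A minimal sketch of the underlying idea, using an invented two-node network: the semi-tensor product construction turns a Boolean update rule into a linear state-transition matrix acting on indicator (or probability) vectors over the 2^n network states:

```python
import numpy as np
from itertools import product

# Toy 2-node Boolean network standing in for an operator/system interaction:
# x1' = x1 OR x2, x2' = NOT x1   (update rules invented for illustration)
def update(x1, x2):
    return (x1 or x2, not x1)

states = list(product([False, True], repeat=2))     # the 4 network states
idx = {s: i for i, s in enumerate(states)}

# L is the state-transition matrix the algebraic formulation produces:
# column j carries state j to its logical successor.
L = np.zeros((4, 4))
for s in states:
    L[idx[update(*s)], idx[s]] = 1.0

# Evolving a distribution over states is now a discrete-time linear system.
p = np.full(4, 0.25)
for step in range(5):
    p = L @ p
    print(step + 1, np.round(p, 3))    # converges onto the fixed point (T, F)
```

Once the logic is in this linear form, reachability and reliability questions reduce to matrix computations on L, which is the leverage the paper draws from the semi-tensor product technique.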

392

Toward Modelling of Human Performance of Infrastructure Systems

Cen Nan (a,c) and Wolfgang Kröger (b)

a) Reliability and Risk Engineering Group (RRE), ETH Zürich, Switzerland, b) ETH Risk Center, ETH Zürich, Switzerland, c) Land Using Engineering Group (LUE), ETH Zürich, Switzerland

During the last decade, research related to the modelling and simulation of infrastructure systems has primarily focused on the performance of their technical components, almost ignoring the importance of the non-technical components of these systems, e.g., human operators and users. Yet the human operators of infrastructure systems have become essential not just for maintaining daily operation, but also for ensuring the security and reliability of the system. Therefore, developing a modeling framework capable of analyzing human performance in a comprehensive way has become crucial. The framework proposed in this paper is generic and consists of two parts: an analytical method based on the Cognitive Reliability and Error Analysis Method (CREAM) for human performance assessment, and an Agent-Based Modeling (ABM) approach for the representation of human behaviors. This framework is a pilot work exploring possibilities for simulating human operators of infrastructure systems through advanced modeling approaches. The applicability of the framework is demonstrated using the SCADA (Supervisory Control and Data Acquisition) system as an exemplary system.

W04 Marine Engineering

10:30 Waialua

Chair: Arsham Mazaheri, Aalto University

26

A Bayesian Network Model for Accidental Oil Outflow in Double Hull Oil Product Tanker Collisions

Floris Goerlandt and Jakub Montewka

Aalto University, Department of Applied Mechanics, Marine Technology, Research Group on Maritime Risk and Safety, P.O. Box 15300, FI-00076 AALTO, Finland

This paper proposes a Bayesian belief network (BBN) model for the estimation of accidental oil outflow in a ship-ship collision in which a product tanker is struck. The intended application area for the model is maritime traffic risk assessment, i.e. a setting in which the uncertainty regarding the specific vessel characteristics is high. The BBN combines a model linking relevant variables of the impact scenario to the damage extent with a model for estimating the tank layout based on the limited information about the ship typically available from Automatic Identification System (AIS) data. The damage extent model, formulated as a logistic regression model and based on a mechanical engineering model of the coupled inner-outer dynamics problem of two colliding ships, is implemented in a discretized version in the BBN. The model for estimating the tank layout is applied to a representative set of product tankers typically operating in the Baltic Sea area. The methodology for constructing the BBN is discussed and results are shown.
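
The damage-extent component can be sketched as a logistic regression whose output a BBN node then discretizes. The coefficients and covariates below are made up for illustration; they are not the fitted model from the paper:

```python
import numpy as np

# Log-odds of inner-hull breach assumed linear in impact energy and location.
beta0, beta_e, beta_m = -4.0, 0.055, 0.8       # hypothetical coefficients

def p_breach(impact_energy_mj, midship_hit):
    """Logistic link: P(inner hull breached | impact scenario)."""
    z = beta0 + beta_e * impact_energy_mj + beta_m * midship_hit
    return 1.0 / (1.0 + np.exp(-z))

# Tabulate over the coarse states a discretized BBN node would use.
for energy in (20, 60, 120):
    for mid in (0, 1):
        print(f"E = {energy:3d} MJ, midship = {mid}: "
              f"P(breach) = {p_breach(energy, mid):.3f}")
```

Roughly speaking, such conditional probabilities populate the table of a damage-extent node whose parents are the impact-scenario variables, so the same evidence propagation that handles the AIS-based tank-layout uncertainty also handles the structural response.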

37

Ship Grounding Damage Estimation Using Statistical Models

Otto-Ville Sormunen

Aalto University, Department of Applied Mechanics, Marine Technology, Research Group on Maritime Risk and Safety, Espoo, Finland

This paper presents a generalizable and computationally fast method for estimating the maximum damage extent in case of grounding, based on damage statistics of groundings in Finnish waters. The damage is measured as the relative maximum damage depth into the bottom structure, the total damage length, and the two-dimensional damage area.

61

Effects of the Background and Experience on the Experts’ Judgments through Knowledge Extraction from Accident Reports

Noora Hyttinen (a), Arsham Mazaheri (b), and Pentti Kujala (c)

a) Aalto University, Department of Applied Mathematics, School of Science, Espoo, Finland, b) Aalto University, Department of Applied Mechanics, School of Engineering, Espoo, Finland; Kotka Maritime Research Center (Merikotka), Kotka, Finland, c) Aalto University, Department of Applied Mechanics, School of Engineering, Espoo, Finland

Available risk models for maritime risk analysis are not adequate for risk management purposes, as they are not evidence-based. One source of evidence that can be used for accident modeling is accident reports, which must be reviewed to extract the knowledge they present. This study investigates how differences in the background and expertise of reviewers affect the knowledge extracted from accident reports. The study was conducted using a three-round Delphi method with two test groups, researchers and mariners, who reviewed four grounding accident reports prepared by the Finnish Accident Investigation Board. The results show that although neither group has superiority over the other with regard to the extracted knowledge, some categories are chosen more frequently by a specific group. Mariners more often chose causes related to navigation and the actions of the crew, while the researchers tended to see more organization- and environment-related causes. Thus, the background of the reviewer should be considered in evidence-based modeling, as it affects the resulting models and therefore the risk control options suggested by the constructed models.

327

A Study for Adapting a Human Reliability Analysis Technique to Marine Accidents

Kenji Yoshimura (a), Takahiro Takemoto (b), Shin Murata (c), and Nobuo Mitomo (d)

a) National Maritime Research Institute, Mitaka, Japan, b) Tokyo University of Marine Science and Technology, Tokyo, Japan, c) National Institute for Sea Training, Yokohama, Japan, d) Nihon University, Funabashi, Japan

The deck officer who has the duty of navigation and keeping watch on a ship's bridge is known as the officer of the watch (OOW). The OOW is a qualified and capable person with knowledge of ship navigation. According to the Japan Marine Accidents Inquiry Agency, however, "inadequate lookout" is the cause of 84% of collision accidents. In 41% of accidents, "the OOW couldn't find the target until collision," and in 32% of collisions, "even though the OOW had found the target, they didn't maintain a proper lookout." Many of the causes behind accidents pertain not only to the OOW's knowledge and capability, but also to background factors. The Japan Transport Safety Board (JTSB) was established to prevent recurrences and to mitigate damage caused by accidents, and it is considering introducing an analysis method with objective, scientific processes. The cognitive reliability and error analysis method (CREAM) is a technique for analysing human reliability. CREAM organizes interactions between humans and the environment using the man-technology-organization triad. It defines common performance conditions (CPCs), the dependencies between them, and the links between antecedents and consequences to clarify the background factors that affect human performance. The method has mainly been used in the nuclear industry. When analysing the causes of accidents, it is necessary to clarify how much influence conditions have on human performance and the dependencies between CPCs. Since these conditions change across domains, the CPCs will apply differently to domains other than the nuclear industry. For example, comparing the nuclear and maritime industries, there are significant differences in the influence the work environment has on behaviour and human performance. Therefore, the dependencies between CPCs and their priority are now evaluated according to the expert judgment of each domain. To facilitate simple and objective analysis, the CPCs, their dependencies, and the links need to be fitted to each domain. From this point of view, we first proposed CPCs adapted to maritime collision accidents. Secondly, we administered a questionnaire to OOWs to quantify the dependencies between CPCs and their priorities. Thirdly, we conducted a retrospective analysis of marine accident reports to characterize the links between antecedents and consequences. Though our research is ongoing, we have reached certain conclusions. We herein provide an outline of our findings and the results of the questionnaire survey. We also specifically discuss the dependencies between the CPCs and priorities that were adapted to maritime collision accidents.

364

Quantifying the Effect of Noise, Vibration and Motion on Human Performance in Ship Collision and Grounding Risk Assessment

Jakub Montewka, Floris Goerlandt (a), Gemma Innes-Jones, Douglas Owen (b), Yasmine Hifi (c), Markus Porthin (d)

a) Aalto University, Department of Applied Mechanics, Marine Technology, Research Group on Maritime Risk and Safety, Espoo, Finland, b) Lloyd's Register, EMEA, Bristol, UK, c) Brookes Bell R&D, Glasgow, UK, d) VTT Technical Research Centre of Finland, Espoo, Finland

Risk-based design (RBD) methodology for ships is a relatively new and fast-developing discipline. However, quantification of the human error contribution to the risk of collision or grounding within RBD has not been considered before. This paper introduces probabilistic models linking the effect of ship motion, vibration and noise to risk through the mediating agent of a crewmember. The models utilize the concept of Attention Management, which combines the theories described by the Dynamic Adaptability Model, the Cognitive Control Model and the Malleable Attentional Resources Theory. To model the risk, an uncertainty-based approach is taken, under which the available background knowledge is systematically translated into a coherent network and the evidential uncertainty is qualitatively assessed. The obtained results are promising, as the models respond to changes in the GDF nodes as expected. The models may be used as intended by naval architects and vessel designers to facilitate risk-based ship design.

W05 Uncertainty, Sensitivity, and Bayesian Methods I

10:30 Wai'anae

Chair: David Esh, US Nuclear Regulatory Commission

10

Further Development of the GRS Common Cause Failure Quantification Method

Jan Stiller, Albert Kreuser, Claus Verstegen

Gesellschaft für Anlagen- und Reaktorsicherheit mbH (GRS), Cologne, Germany

For the quantification of common cause failures (CCF), GRS has developed the coupling model. This model has two important features. Firstly, estimation uncertainties arising from different sources, e.g. statistical uncertainties, uncertainties of expert judgments or uncertainties due to inhomogeneities of statistical populations, are taken into account in a consistent way. Secondly, it automatically allows for the extrapolation of CCF events to groups of different sizes ("mapping"). This feature has been very important, since for most component types groups of several different sizes can be found in German NPPs. The model assumptions necessary to allow for this feature, however, also lead to undesirable convergence properties when a large amount of operating experience is available. GRS has therefore started a project to research possible improvements of CCF modeling in this respect, including the development of models that avoid the restrictive modeling assumptions, allow a comprehensive treatment of uncertainties, and are applicable to the data available in the German CCF data pool, which does not contain information on single failures. Two different models have been developed, including a conservative mapping procedure. Comparisons of the results of the new estimation procedures with the coupling model show that the results are compatible.

75

Plant-Specific Uncertainty Analysis for a Severe Accident Pressure Load Leading to a Late Containment Failure

S.Y. Park and K.I. Ahn

Korea Atomic Energy Research Institute, Daejeon, KOREA

Typical containment performance analyses for a Level 2 probabilistic safety analysis (PSA) have made use of a containment event tree (CET) modeling approach to model the containment responses, depicting the various phenomenological processes, containment conditions, and containment failure modes that can occur during severe accidents. A general approach in the quantification of the containment event tree is to use a decomposition event tree (DET) to allow a more detailed treatment of the top events. Quantification of the physical phenomena in the decomposition event tree is based on results obtained through validated code calculations or expert judgment. The phenomenological modeling in the event tree still entails a high level of uncertainty because of our incomplete understanding of reactor systems and severe accident phenomena. This paper presents an uncertainty analysis of containment pressure behavior during severe accidents for the optimum assessment of the late containment failure model of a decomposition event tree.

87

Comparison of Uncertainty and Sensitivity Analyses Methods Under Different Noise Levels

David Esh and Christopher Grossman

US Nuclear Regulatory Commission, Washington, DC, USA

Uncertainty and sensitivity analyses are an integral part of probabilistic assessment methods used to evaluate the safety of a variety of different systems. In many cases the systems are complex, information is sparse, and resources are limited. Models are used to represent and analyze the systems, and to incorporate uncertainty the developed models are commonly probabilistic. Uncertainty and sensitivity analyses are used to focus iterative model development activities, facilitate regulatory review of the model, and enhance interpretation of the model results. A large variety of uncertainty and sensitivity analysis techniques has been developed as modeling has advanced and become more prevalent. This paper compares the practical performance of six different uncertainty and sensitivity analysis techniques on ten different test functions under different noise levels. In addition, insights from two real-world examples are developed.

115

Understanding Relative Risk: An Analysis of Uncertainty and Time at Risk

A. El-Shanawany (a,b)

a) Imperial College London, London, United Kingdom, b) Corporate Risk Associates, London, United Kingdom

Risk at nuclear facilities in the UK is managed through a combination of the ALARP principle (As Low As Reasonably Practicable) and numerical targets. The baseline risk of a plant is calculated with Probabilistic Safety Assessment (PSA) models, which are also used to estimate the risk in various plant states, including maintenance states. Taking safety equipment out of service for maintenance yields a temporary increase in risk. Software tools such as RiskWatcher can be used to monitor the real-time level of risk at a plant. In combination with such tools for estimating the instantaneous risk, time-at-risk arguments are frequently employed to justify safety during plant modifications or maintenance activities. In this paper we consider the effect of using conservative estimates for the probability of failure on demand of safety-critical components compared to using a full uncertainty distribution. It is found that conservatism in the base case model translates into a hidden optimism when used in time-at-risk arguments. While it is known and accepted that quantified risks are necessarily approximate, useful insights can be gained through risk modelling by considering relative risks, and anything that distorts relative risks reduces the usefulness of the risk modelling. The important point about the effect discussed here is that it has the potential to distort relative risks. The mapping between the base case conservatism and the time-at-risk optimism is characterised, and the effect is illustrated using simple hypothetical examples. These examples show that the shapes of the full uncertainty distributions of model parameters have important and direct consequences for time-at-risk arguments and must be considered in order to avoid distorting the risk profile.
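
The effect can be reproduced with a minimal numeric sketch (all numbers invented): using a conservative 95th-percentile point estimate for an uncertain probability of failure on demand (PFD) lowers the apparent risk-increase factor during maintenance relative to propagating the full lognormal distribution.

```python
import numpy as np

rng = np.random.default_rng(4)

# Two redundant trains; train 1 is taken out for maintenance.
# PFD of train 1: lognormal, median 1e-3, error factor 10 (illustrative).
sigma = np.log(10.0) / 1.645
pfd1 = rng.lognormal(np.log(1e-3), sigma, 1_000_000)
pfd2 = 1e-3                                   # train 2, fixed for simplicity

p95 = np.quantile(pfd1, 0.95)                 # conservative point estimate

base_mean = np.mean(pfd1 * pfd2)              # mean baseline risk, both trains
maint = pfd2                                  # risk with train 1 out of service

print("risk-increase factor, conservative point value:", maint / (p95 * pfd2))
print("risk-increase factor, full distribution:       ", maint / base_mean)
# ~100 with the conservative point value vs ~375 with the full distribution
```

The conservative base case makes the maintenance state look only about 100 times worse, while the mean-based answer is nearly four times larger; this is the hidden optimism in time-at-risk arguments that the paper characterises.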

W06 Aging Management Issues for Nuclear (Spent) Fuel and HLW Transport and Storage

10:30 Ewa

Chair: Dietmar Wolff, BAM Federal Institute for Materials Research and Testing

257

Understanding the Long-term Behavior of Sealing Systems and Neutron Shielding Material for Extended Dry Cask Storage

Dietmar Wolff, Matthias Jaunich, Ulrich Probst, and Sven Nagelschmidt

Federal Institute for Materials Research and Testing (BAM), Berlin, Germany

In Germany, the concept of dry interim storage in dual purpose metal casks prior to disposal is being pursued for spent nuclear fuel (SF) and high active waste (HAW) management. However, since no repository is available today, the initially planned and established dry interim storage license duration of up to 40 years will be too short, and its extension will become necessary. For such a storage license extension, it is required to assess the long-term performance of the SF and all safety-related storage system components in order to confirm the viability of extended storage. The main safety-relevant components are the thick-walled dual purpose metal casks. These casks consist of a monolithic cask body with integrated neutron shielding components (polymers, e.g. polyethylene) and a monitored double lid barrier system with metal and elastomeric seals. The metal seals of this bolted closure system guarantee the required leak-tightness, whereas the elastomeric seals allow for leakage rate measurement of the metal seals. This paper presents an update on ongoing long-term tests on metal seals at different temperatures under static conditions over long periods of time. In addition, first results of our approach to understanding the aging behavior of different elastomeric seals and of the neutron radiation shielding material polyethylene are discussed.

259

Gap Analysis Examples from Periodical Reviews of Transport Package Design Safety Reports of SNF/HLW Dual Purpose Casks

Steffen Komann, Frank Wille, Bernhard Droste

Federal Institute for Materials Research and Testing (BAM), Berlin, Germany

Storage of spent nuclear fuel and high-level waste in dual purpose casks (DPC) is associated with the challenge of maintaining safety for transportation over several decades of storage. Besides the consideration of aging mechanisms through appropriate design, material selection and operational controls to assure technical reliability by aging management measures, an essential issue is the continuous control and updating of the DPC safety case. Not only are the technical objects subject to aging; the safety demonstration basis is also subject to "aging" due to possible changes in regulations, standards and scientific/technical knowledge. The basic document defining the transport safety conditions is the package design safety report (PDSR) for the transport version of the DPC. To ensure safe transport in the future to a destination that is not yet known (because repository sites do not yet exist), periodical reviews of the PDSR, in connection with periodic renewals of package design approval certificates, have to be carried out. The main reviewing tool is a gap analysis. A gap analysis for a PDSR is the assessment of the state of technical knowledge, standards and regulations regarding the safety functions of structures, systems and components.

417

The Evolution of Safety Related Parameters and their Influence on Long-Term Dry Cask Storage

Klemens Hummelsheim (a), Jörn Stewering (b), Sven Keßen and Florian Rowold (a)

a) Gesellschaft für Anlagen und Reaktorsicherheit (GRS) mbH, Garching, Germany, b) Gesellschaft für Anlagen und Reaktorsicherheit (GRS) mbH, Cologne, Germany

For spent nuclear fuel management in Germany, the concept of dry interim storage in dual purpose casks prior to direct disposal is applied. Due to current delays in repository site selection and exploration, an extension of the storage period beyond the granted license time of 40 years seems inevitable. Compliance with safety requirements under consideration of aging effects, in particular safe confinement, radiation shielding, subcriticality and decay heat removal, will be crucial for the extension of these operation licenses. Thermal loads, mechanical stresses, and gamma and neutron radiation are considered the main contributors to aging and degradation effects of the fuel, its cladding and the cask over the period of long-term storage, including subsequent transport. In order to assess the long-term safety of such a system, knowledge about the evolution of the influencing variables is required. This paper describes numerical investigations of spent fuel long-term behavior, e.g. fuel clad temperature and hoop stress over a time period of 100 years. Analytical storage temperature and stress calculations for a generic cask with different burnups and loading patterns of UO2 and MOX fuel are presented. The results are related to current questions regarding long-term degradation effects. Furthermore, shielding analyses with regard to varying densities of the cask's integrated neutron moderator are discussed.

542

Aging Management of Dual-Purpose Casks on the Example of CASTOR® KNK

Iris Graffunder (a), Ralf Schneider-Eickhoff and Rainer Nöring (b)

a) EWN Energiewerke Nord GmbH, Lubmin, Germany, b) GNS Gesellschaft für Nuklear-Service mbH, Essen, Germany

In 2010 the spent fuel of the German prototype fast breeder reactor KNK was returned from France to Germany. For the return and the interim storage, four transport and storage casks of the type CASTOR® KNK were designed and fabricated by GNS Gesellschaft für Nuklear-Service mbH. The casks were transported to Germany in December 2010 and stored in the interim storage facility ZLN operated by Energiewerke Nord GmbH (EWN). Due to their dual purpose, all CASTOR® casks have to fulfill the requirements of both fields of operation, transport and storage. After a minimum storage period of 40 years, a last transport to the final repository has to be carried out under the same requirements as for new casks. To ensure that a cask can be transported after the storage period, the authorities require the renewal of the package design approval, normally every 5 or 10 years. In the case of CASTOR® KNK, the approval expires in October 2014. EWN and GNS are planning an extension of the validity period of the package design approval to 10 years. For this purpose an aging management report is necessary, considering all stress factors that are crucial for the rate of aging: radiation, thermal and mechanical loads, and corrosion.

W07 Dynamic Reliability II

10:30 Kona

Chair: Diego Mandelli, Idaho National Laboratory

513

Overview of New Tools to Perform Safety Analysis: BWR Station Black Out Test Case

D. Mandelli, C. Smith (a), T. Riley (c), J. Nielsen, J. Schroeder, C. Rabiti, A. Alfonsi, J. Cogliati, R. Kinoshita (a), V. Pascucci, B. Wang, D. Maljovec (b)

a) Idaho National Laboratory, Idaho Falls (ID), USA, b) University of Utah, Salt Lake City (UT), USA, c) Oregon State University, Corvallis (OR), USA

The existing fleet of nuclear power plants is in the process of extending its lifetime and increasing the power generated via power uprates. In order to evaluate the impacts of these two factors on the safety of a plant, the Risk Informed Safety Margin Characterization project aims to provide insights to decision makers through a series of simulations of the plant dynamics for different initial conditions (e.g., probabilistic analysis and uncertainty quantification). This paper focuses on the impacts of a power uprate on the safety margin of a boiling water reactor for a station black-out event. The analysis is performed using a combination of thermal-hydraulic codes and a stochastic analysis tool currently under development at the Idaho National Laboratory, RAVEN. We employed both classical statistical tools, i.e. Monte Carlo sampling, and more advanced machine-learning-based algorithms to perform uncertainty quantification in order to quantify changes in system performance and limitations as a consequence of the power uprate. We also employed advanced data analysis and visualization tools that helped us correlate simulation outcomes, such as maximum core temperature, with the set of uncertain input parameters. The results give a detailed investigation of the issues associated with a plant power uprate, including the effects on station black-out accident scenarios. We were able to quantify how the timing of specific events was impacted by a higher nominal reactor core power. Such safety insights can provide useful information to decision makers for performing risk-informed margins management.

80

Simulation Methods to Assess Long-Term Hurricane Impacts to U.S. Power Systems

Andrea Staid, Seth D. Guikema (a), Roshanak Nateghi (a,b), Steven M. Quiring (c), and Michael Z. Gao (a)

a) Johns Hopkins University, Baltimore, MD USA, b) Resources for the Future, Washington, DC USA, c) Texas A&M University, College Station, TX USA

Hurricanes have caused extensive damage to infrastructure, massive financial losses, and displaced communities in many regions of the United States throughout history. The electric power distribution system is particularly vulnerable; power outages and related damage have to be repaired quickly, forcing utility companies to spend significant amounts of time and resources during and after each storm. Being able to anticipate outcomes with some degree of certainty allows those affected to plan ahead, thus minimizing potential losses. This is true on both very short and very long time scales. In the context of hurricanes, utility companies try to correctly anticipate power outages and bring in the repair crews necessary to quickly and efficiently restore power to their customers. A similar type of planning can be applied over a long time horizon when making decisions on investments to improve grid reliability, resilience, and robustness. We present a methodology for assessing long-term risks to the power system while also incorporating possible changes in storm behavior as a result of climate change. We describe our simulation methodology and demonstrate the assessments for regions lying along the Gulf and Atlantic Coasts of the United States.

238

Towards Reliability Evaluation of AFDX Avionic Communication Systems With Rare-Event Simulation

Armin Zimmermann, Sven Jäger (a), and Fabien Geyer (b)

a) Software and Systems Engineering, Ilmenau University of Technology; Ilmenau, Germany, b) Airbus Group Innovations, Dept. TX4CP; Munich, Germany

Reliability is a major concern for avionic systems. The risks in their design can be minimized by using model-based systems engineering methods including simulation and mathematical analysis. However, there are non-functional properties that are computationally expensive to evaluate, for instance when rare events are important. Rare-event simulation methods such as RESTART can be used, leading to speedups of several orders of magnitude. We consider AFDX (avionic full-duplex switched Ethernet) networks as an application example here, where the end-to-end delay and buffer utilizations are important for a safe and efficient system design. The paper proposes generic model patterns for AFDX networks, and shows how very low probabilities can be computed in acceptable time with the presented method and software tool.
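
A stripped-down splitting estimator in the RESTART family can be demonstrated on a toy overflow problem. The model below is an M/M/1 jump chain, not an AFDX network, and the thresholds, rates, and splitting factor are invented:

```python
import random

rng = random.Random(5)
P_UP = 0.3                       # arrival probability in the embedded chain
LEVELS = [2, 4, 6, 8, 10]        # splitting thresholds; overflow at 10

def reaches(x, target):
    """Random walk from queue length x; True if it hits target before 0."""
    while 0 < x < target:
        x += 1 if rng.random() < P_UP else -1
    return x == target

def splitting(n0=10000, split=5):
    """Fixed-splitting estimate of P(queue hits 10 before emptying | start 1)."""
    walkers = [1] * n0
    prob = 1.0
    for level in LEVELS:
        survivors = sum(reaches(x, level) for x in walkers)
        if survivors == 0:
            return 0.0
        prob *= survivors / len(walkers)          # stagewise conditional estimate
        walkers = [level] * (survivors * split)   # clone each survivor
    return prob

print(f"splitting estimate: {splitting():.3e}")   # analytic value ~2.79e-4
```

Each threshold restarts many copies of the promising trajectories, so the product of the stagewise fractions estimates the rare probability with far fewer total steps than crude Monte Carlo would need; RESTART adds adaptive retrial counts and importance-function choices on top of this basic scheme.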

486

Extension of DMCI to Heterogeneous Infrastructures: Model and Pilot Application

Paolo Trucco, Massimiliano De Ambroggi, Pablo Fernandez Campos (a), Ivano Azzini, and Georgios Giannopoulos (b)

a) Department of Management, Economics and Industrial Engineering, Politecnico di Milano, Milan, Italy, b) European Commission - DG Joint Research Centre (JRC), Ispra, Italy

Since the adequate functioning of critical infrastructures is crucial to sustaining societal and economic development, understanding and assessing their vulnerability and interdependency becomes more and more important for improving resilience at the system level. The paper proposes an extension of DMCI (Dynamic Functional Modelling of vulnerability and interoperability of CIs) to modelling the vulnerability and interdependencies of heterogeneous infrastructures; i.e., the interactions between the electric power infrastructure and the transport infrastructure system have been modelled. The simulation tool has been implemented on the Matlab platform Simulink in order to overcome computational limitations of the first DMCI version, implemented in Matlab, in quantifying the propagation of inoperability and the logical interdependencies related to demand shift, and to obtain a modular and user-friendly solution, even for users who are not expert in simulation. The new DMCI model has been tested in a pilot application comprising more than 200 vulnerable nodes and covering both the power transmission grid and the transportation systems of the province of Milan (Italy). The most vital and vulnerable nodes have been identified under different blackout scenarios, for which specific data on vulnerable nodes were collected directly from the operators.

532

A Longitudinal Analysis of the Drivers of Power Outages During Hurricanes: A Case Study with Hurricane Isaac

Gina Tonn, Seth Guikema (a), Celso Ferreira (b), and Steven Quiring (c)

a) Johns Hopkins University, Baltimore, MD, US, b) George Mason University, Fairfax, VA, US, c) Texas A&M University, College Station, TX

In August 2012, Hurricane Isaac, a Category 1 hurricane at landfall, caused extensive power outages in Louisiana. The storm brought high winds, storm surge and flooding to Louisiana, and power outages were widespread and prolonged. Hourly power outage data for the state of Louisiana were collected during the storm and analyzed. The analysis included correlating hourly power outage figures by zip code with wind, rainfall, and storm surge using a non-parametric ensemble data mining approach. Results were analyzed to understand how the drivers of power outages differed geographically within the state. This analysis provided insight into how rainfall and storm surge, along with wind, contribute to power outages in hurricanes. By conducting a longitudinal study of outages at the zip code level, we were able to gain insight into the causal drivers of power outages during hurricanes. The results of this analysis can be used to better understand hurricane power outage risk and to better prepare for future storms. They will also be used to improve the accuracy and robustness of a power outage forecasting model developed at Johns Hopkins University.
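
A sketch of the non-parametric ensemble idea, here a random forest with feature importances; the data are synthetic and the real study's covariates and model details differ:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(6)

# Synthetic stand-in for zip-code-level storm observations.
n = 2000
wind = rng.gamma(4.0, 5.0, n)              # wind speed proxy
rain = rng.gamma(2.0, 20.0, n)             # rainfall proxy
surge = rng.exponential(0.5, n)            # storm-surge proxy
outages = (0.8 * wind + 0.15 * rain + 5.0 * surge
           + rng.normal(0, 5, n)).clip(min=0)   # invented response

X = np.column_stack([wind, rain, surge])
model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, outages)

for name, imp in zip(["wind", "rain", "surge"], model.feature_importances_):
    print(f"{name:6s} importance: {imp:.2f}")
```

Fitting such a model separately for coastal and inland zip codes would be one simple way to expose the kind of geographic differences in outage drivers that the abstract describes.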

W08 Special Session

12:00

Chair: John O'Donnell

SPECIAL SESSION

W11 Digital I&C and Software Reliability II

1:30 PM Honolulu

Chair: Robert Enzinna, AREVA Inc.

148

Experimental Approach to Evaluate the Reliability of Digital I&C Systems in Nuclear Power Plants

Seung Jun Lee (a), Man Cheol Kim (b), and Wondea Jung (a)

a) Korea Atomic Energy Research Institute, Daejeon, Korea, b) Chung-ang University, Seoul, Korea

Owing to the unique characteristics of digital instrumentation and control (I&C) systems, the reliability analysis of digital systems has become an important element of probabilistic safety assessments. In this work, an experimental approach to estimate the reliability of digital I&C systems is considered. A digitalized reactor protection system was analyzed in detail, and the system behavior was observed when a fault was injected into the system using a software-implemented fault injection technique. Based on the analysis of the experimental results, it is possible to not only evaluate the system reliability but also identify weak points of fault-tolerant techniques by identifying undetected faults. The results can be reflected in designs to improve the capability of fault-tolerant techniques.
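
A toy fault injection loop conveys the idea of observing voted system behavior under injected faults. The 2-out-of-4 voting logic and fault counts below are illustrative, not the reactor protection system architecture from the paper:

```python
import random

rng = random.Random(7)

def vote_2oo4(ch):
    """Trip if at least 2 of 4 redundant channels demand it."""
    return sum(ch) >= 2

def inject(ch, n_faults):
    """Software-implemented fault injection: flip n random channel outputs."""
    out = list(ch)
    for i in rng.sample(range(len(ch)), n_faults):
        out[i] = not out[i]
    return out

trials = 20000
for n_faults in (1, 2, 3):
    wrong = 0
    for _ in range(trials):
        demand = rng.random() < 0.5          # is a real trip demanded?
        channels = [demand] * 4
        if vote_2oo4(inject(channels, n_faults)) != demand:
            wrong += 1
    print(f"{n_faults} injected fault(s): voted output wrong in "
          f"{wrong / trials:.1%} of trials")
# single faults are fully masked; two or three defeat the voting logic
```

Running such campaigns against the real software, as the authors do with fault injection into a digitalized reactor protection system, exposes which injected faults the fault-tolerant mechanisms detect and which propagate to the system output.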

170

The Contribution to Safety of a Diverse Backup System for Digital Safety I&C Systems in Nuclear Power Plants, a Probabilistic Approach

W. Postma, J.L. Brinkman

NRG, Arnhem, the Netherlands

NRG performed a research project on the safety influence of diverse backup systems alongside the existing digital I&C safety systems in Nuclear Power Plants (NPPs). As part of this research project, a probabilistic approach was used to evaluate the basic options for logically connecting a diverse backup system to the existing digital I&C systems. Four different basic design options can be distinguished: (1) no backup system is used; (2) the backup system is used only if the digital system has failed, with automatic switch-over to the backup system; (3) the backup system is used only if the digital system has failed, with manual switch-over; (4) the backup system is in continuous operation, with a vote equal to that of the digital system. Designs (2) and (4) have been modeled and compared with the situation without a backup system (design 1). This paper discusses the model and the results, including sensitivity analyses, in order to reflect on the probabilistic impact of a diverse backup system.
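
A back-of-the-envelope comparison of the four options can be sketched with made-up numbers (all values below are invented; the paper's models and sensitivity analyses are far more detailed):

```python
# Hypothetical failure probabilities per demand
p_digital = 1e-4       # digital safety I&C fails
p_backup = 1e-3        # diverse backup fails
p_switch_auto = 1e-3   # automatic switch-over fails
p_switch_man = 1e-2    # operator fails to switch over manually
beta = 0.01            # residual common-cause coupling despite diversity

common = beta * min(p_digital, p_backup)       # shared-cause contribution
designs = {
    "1: no backup":              p_digital,
    "2: automatic switch-over":  common + p_digital * (p_backup + p_switch_auto),
    "3: manual switch-over":     common + p_digital * (p_backup + p_switch_man),
    "4: continuous, equal vote": common + p_digital * p_backup,
}
for name, p in designs.items():
    print(f"design {name:26s} P(loss of function) ~ {p:.1e}")
```

Even this crude rare-event approximation shows the qualitative pattern one would expect: any diverse backup helps by roughly an order of magnitude here, the switch-over mechanism and the residual common-cause coupling dominate the remaining risk, and the ranking is sensitive to the assumed beta factor, which is why sensitivity analyses matter in such studies.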

196

Modeling of Digital I&C and Software Common Cause Failures: Lessons Learned from PSAs of TELEPERM® XS-Based Protection System Applications

Robert S. Enzinna (a), Mariana Jockenhoevel-Barttfeld, Yousef Abusharkh (b), and Herve Bruneliere (c)

a) AREVA Inc., Lynchburg, VA, USA, b) AREVA GmbH, Erlangen, Germany, c) AREVA SAS, Paris, France

The authors have created probabilistic safety assessment (PSA) models of TELEPERM® XS (TXS)-based digital protection systems for a variety of nuclear power plant applications in the USA and around the world. This includes PSA models for digital protection system upgrades, and protection systems for new reactor builds. The PSA models have involved detailed digital instrumentation and control (I&C) fault tree models that have been fully integrated with the full plant PSA model. This paper discusses lessons learned, insights, and modeling recommendations gleaned from this experience. The paper discusses recommended level of modeling detail, development of failure rate and fault coverage data, treatment of fault tolerant design features and common cause failure (CCF) defenses, fault tree modularization/simplification, and other topics of interest. Practical suggestions for PSA modeling are made based on experience gained from actual digital I&C PSA models built for several internal and external customers. Modeling of CCF for the TXS hardware modules and for the software is highlighted, especially focusing on the quantification of software common cause failures (SWCCF). The authors describe the methodology used for quantification of SWCCF in the PSA studies, the definition of realistic software CCF modes, and estimation of failure probabilities.

283

Methodology for Safety Assessment of the Defense-in-Depth and Diversity Concept of the Digital I&C in the Modernization of an NPP in Finland

Ewgenij Piljugin (a), Jarmo Korhonen (b)

a) Gesellschaft fuer Anlagen und Reaktorsicherheit (GRS) mbH, Garching, Germany, b) Fortum, Power and Heat, Helsinki, Finland

A new automation concept based on digital instrumentation and control (I&C) systems will be implemented at the Loviisa NPP in Finland within a modernization project. The new automation concept was developed by Fortum in consideration of the Defense-in-Depth and Diversity (3D) strategy. Different methodologies are used in several tasks of the design verification, such as safety evaluation of the I&C functions and failure mode and effect analysis (FMEA) for identifying the relevant failure modes of the I&C hardware. The results of the analysis present generic and design-specific issues. The generic issues primarily concern methodological aspects, while the design-specific issues concern identifying failure modes of the I&C equipment, evaluating failure effects and propagation paths, identifying candidates for common cause failure (CCF) analysis, and identifying appropriate countermeasures to prevent or mitigate hazardous failure effects. This paper presents selected insights from the evaluation of the safety-significant aspects of the reliability of the new digital I&C systems and discusses the results of the V&V tasks from a methodological point of view. The identified issues should also support the consideration of safety-relevant aspects of digital safety-important I&C systems in the probabilistic risk assessment (PRA) of modernized nuclear power plants.

W12 Safety Assessment Software and Tools I

1:30 PM Kahuku

Chair: Daniel Clayton, Sandia National Laboratories

31

Scrum, Documentation and the IEC 61508-3:2010 Software Standard

Thor Myklebust (a), Tor Stålhane (b), Geir Kjetil Hanssen (a), Tormod Wien (c) and Børge Haugset (a)

a) SINTEF ICT, b) IDI NTNU, c) ABB

Agile development, and especially Scrum, has gained increasing popularity. IEC 61508 and several related standards for the development of safety critical software have a strong focus on documentation, including planning, which shall show that all required activities have been performed. Agile development, on the other hand, has as one of its explicit goals to reduce the amount of documentation and to mainly produce and maintain working software. The problem created by the need to develop a large amount of documents when developing safety critical systems is, however, not a problem just for agile development – it has been identified as a problem for all development of safety critical software. In some cases up to 50% of all project resources have been spent on activities related to the development, maintenance and administration of documents. Thus, a way to reduce the amount of documentation will benefit all developers of safety critical systems. By going systematically through all the documentation requirements in IEC 61508-1 (general documentation requirements) and IEC 61508-3 (software requirements), and by using the combined expertise of the five authors, we have been able to identify documents that are or can be generated by tools used in the requirements and development process (e.g. logs from requirements and testing tools) and documents that can be produced as part of planning and discussions (e.g. snapshots of whiteboards). We have also identified documents that can normally be reused when issuing a new version of the software, and documents that can be combined into one document.

378

A Software Package for the Assessment of Proliferation Resistance of Nuclear Energy Systems

Zachary Jankovsky, Tunc Aldemir, Richard Denning (a), Lap-Yan Cheng and Meng Yue (b)

a) The Ohio State University, Columbus, Ohio, USA, b) Brookhaven National Laboratory, Upton, New York, USA

In order to better safeguard nuclear material from diversion by a malicious actor, it is important to search the input parameter space to gauge the attractiveness of the various strategies that could be employed by such an actor. The ability to create and cluster a large number of scenarios based on similarity allows for a more complete and faster investigation of this parameter space. The software tool PRCALC was developed by Brookhaven National Laboratory to estimate various measures for the covert diversion of nuclear material from a hypothetical fuel cycle system. The software package OSUPR (Ohio State University Proliferation Resistance) was written to extend PRCALC's abilities to allow for the creation of many scenarios at a time, as well as to take advantage of multiple processing threads in the computation of proliferation resistance measures. OSUPR also allows for clustering of the outputs of PRCALC using three methods: mean-shift, k-means, and adaptive mean-shift. The clustered results can yield insights into vulnerable aspects of the fuel cycle system.
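
A rough sketch of the clustering step is shown below, assuming scikit-learn is available; the feature columns and data are invented, and OSUPR's actual input/output format is not represented.

```python
# Sketch of clustering per-scenario proliferation-resistance measures in the
# spirit of OSUPR (hypothetical features and data; requires scikit-learn).
import numpy as np
from sklearn.cluster import KMeans, MeanShift

rng = np.random.default_rng(42)
# Hypothetical per-scenario measures, e.g. (detection probability, diversion time):
scenarios = np.vstack([
    rng.normal([0.9, 30.0], [0.03, 2.0], size=(50, 2)),   # hard-to-exploit group
    rng.normal([0.5, 10.0], [0.05, 1.5], size=(50, 2)),   # vulnerable group
])

kmeans = KMeans(n_clusters=2, n_init=10, random_state=0).fit(scenarios)
meanshift = MeanShift().fit(scenarios)

print("k-means cluster sizes:", np.bincount(kmeans.labels_))
print("mean-shift clusters found:", len(np.unique(meanshift.labels_)))
```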

241

Risk Estimation Methodology for Launch Accidents

Daniel J. Clayton, Ronald J. Lipinski (a), and Ryan D. Bechtel (b)

a) Sandia National Laboratories, Albuquerque, NM, USA, b) Office of Space and Defense Power Systems, U.S. Department of Energy, Germantown, MD, USA

As compact, lightweight power sources with reliable, long lives, Radioisotope Power Systems (RPSs) have made space missions to explore the solar system possible. Because hazardous material can be released during a launch accident, the potential health risk of an accident must be quantified so that appropriate launch approval decisions can be made. One part of the risk estimation involves modeling the response of the RPS to potential accident environments. Because of the complexity of modeling the full RPS response deterministically as a function of the dynamic variables, the evaluation is performed stochastically with a Monte Carlo simulation. The potential consequences can be determined by modeling the transport of the hazardous material in the environment and in human biological pathways. The consequence analysis results are summed and weighted by appropriate likelihood values to give a collection of probabilistic results for the estimation of the potential health risk. This information is used to guide RPS designs, spacecraft designs, mission architecture, and launch procedures in order to potentially reduce the risk, as well as to inform decision makers of the potential health risks resulting from the use of RPSs for space missions.
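
The likelihood-weighted summation described above can be sketched in a few lines; every distribution, scenario and conversion factor below is an invented placeholder, not part of the actual RPS safety analysis.

```python
# Stochastic sketch of likelihood-weighted consequence summation for launch
# accident risk. All scenarios, distributions and factors are illustrative.
import numpy as np

rng = np.random.default_rng(0)
N = 100_000

# Hypothetical accident scenarios: (name, annual probability, lognormal release params)
scenarios = [
    ("on-pad explosion", 1e-3, (-2.0, 1.0)),
    ("ascent failure",   5e-4, (-1.0, 1.2)),
]

total_risk = 0.0
for name, p_accident, (mu, sigma) in scenarios:
    release = rng.lognormal(mu, sigma, N)        # sampled source term (arbitrary units)
    dose_per_release = 0.01                      # assumed consequence conversion factor
    mean_consequence = np.mean(release * dose_per_release)
    total_risk += p_accident * mean_consequence  # likelihood-weighted consequence
    print(f"{name}: conditional mean consequence = {mean_consequence:.3f}")

print(f"Expected annual health risk (illustrative units): {total_risk:.2e}")
```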

296

Development of Online Reliability Monitors Software for Component Cooling Water System in Nuclear Power Plant

Yunli Deng, He Wang, Biao Guo

Fundamental Science on Nuclear Safety and Simulation Technology Laboratory, College of Nuclear Science and Technology, Harbin Engineering University, Harbin, P.R. China

The online risk monitoring system (OLRS) of a nuclear power plant (NPP) is connected to the digital instrumentation and control system through unilateral data transmission. It automatically obtains the actual status of systems and components to determine the instantaneous risk in real time, and is used by the plant staff in support of operational decisions. During normal operation of an NPP, the safety systems work continuously for long periods of time, and there are also regular equipment alterations and maintenance activities. The high reliability of equipment and safety systems is therefore important for the safe operation of an NPP. The objective of this paper is to introduce software developed as an online reliability monitor for an NPP safety system. The Component Cooling Water System (CCWS) is considered as the example safety system in the present study. The online reliability monitor software for the CCWS was developed using Visual Basic 6.0 as the application development platform and Microsoft SQL Server 2000 as the database environment. Verification with the Qinshan main control room simulator showed that the system achieves its function of monitoring the reliability of the critical components of the CCWS.

340

A Parallel Manipulation Method for Zero-suppressed Binary Decision Diagram

Jin Wang, Shanqi Chen, Liqin Hu, Rongxiang Hu, Fang Wang, FDS Team

Institute of Nuclear Energy Safety Technology, Chinese Academy of Sciences, Hefei Anhui, China

A parallel algorithm for the manipulation of Zero-suppressed Binary Decision Diagrams (ZBDDs) on a shared-memory multi-processor system is described. Theoretical analysis shows that parallel manipulation of ZBDDs has better time performance than sequential ZBDD operations. Since the parallel ZBDD algorithm uses much less time than the ordinary ZBDD algorithm, real-time calculation becomes possible in the risk monitoring of a nuclear power plant, which helps accelerate emergency response and improve the safety of nuclear power plants.
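
For readers unfamiliar with the data structure, a minimal sequential ZBDD with a union operation is sketched below; this is purely illustrative and is not the authors' parallel implementation.

```python
# Minimal sequential ZBDD with a set-union operation, to illustrate the data
# structure whose manipulation the paper parallelizes (illustrative sketch only).

ZERO, ONE = "0", "1"   # terminals: empty set, and the set containing the empty combination
_unique = {}           # unique table ensuring shared, canonical nodes

def var_of(f):
    """Variable index of a node; terminals sort below all variables."""
    return f[0] if isinstance(f, tuple) else float("inf")

def mk(var, lo, hi):
    """Create a reduced node; zero-suppression rule: drop nodes whose hi-branch is ZERO."""
    if hi == ZERO:
        return lo
    return _unique.setdefault((var, lo, hi), (var, lo, hi))

def union(f, g):
    """Union of the combination sets represented by ZBDDs f and g."""
    if f == ZERO or f == g:
        return g
    if g == ZERO:
        return f
    vf, vg = var_of(f), var_of(g)
    if vf < vg:
        return mk(f[0], union(f[1], g), f[2])
    if vf > vg:
        return mk(g[0], union(f, g[1]), g[2])
    return mk(f[0], union(f[1], g[1]), union(f[2], g[2]))

# Two minimal cut sets {x1} and {x2}, combined into one ZBDD:
a = mk(1, ZERO, ONE)
b = mk(2, ZERO, ONE)
print(union(a, b))   # (1, (2, '0', '1'), '1')
```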

W13 External Events Hazard/PRA Modeling for Nuclear Power Plants II

1:30 PM O'ahu

Chair: In-Kil Choi, Korea Atomic Energy Research Institute

89

Realistic Modelling of External Flooding Scenarios - A Multi-Disciplinary Approach

J. L. Brinkman

NRG, Arnhem, The Netherlands

Extreme phenomena, such as storm surges or high river water levels, may endanger the safety of nuclear power plants (NPPs) by inundation of the plant site with subsequent damage to safety-related buildings. Flooding may result in simultaneous failures of safety-related components, such as service water pumps and electrical equipment. In addition, the accessibility of the plant may be impeded by flooding of the plant surroundings. Therefore, (re)assessments of flood risk and flood protection measures should be based on accurate, state-of-the-art methods. The Dutch nuclear regulations require that a nuclear power plant shall withstand all external initiating events with a return period not exceeding one million years. For external flooding, this requirement is the basis of the so-called nuclear design level (Nucleair Ontwerp Peil, NOP) of the buildings, i.e. the water level at which a system – among others, the nuclear island and the ultimate heat sink – should still function properly. In determining the NOP, the mean water level, wave height and wave behaviour during storm surges are taken into account. This concept could also be used to implement external flooding in a PSA, by assuming that floods exceeding the NOP level directly lead to core damage. However, this straightforward modelling ignores some important aspects: the first is the mitigative effect of external flood protection such as dikes or dunes; the second is that although water levels lower than the NOP will not directly lead to core damage, they could do so indirectly through combinations of system loss by flooding and random failure of the safety systems required to bring the plant into a safe, stable state. A third aspect is time: failure mechanisms need time to develop, and time (via the duration of the flood) determines the amount of water on site. This paper describes a PSA approach that takes the (structural) reliability of the external defences against flooding and the timing of events into account as the basis for the development and screening of flooding scenarios.
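
A toy numerical version of this argument is sketched below: instead of treating only floods above the NOP as core damage, a hazard curve is combined with a level-dependent conditional core damage probability (CCDP). All curves and numbers are hypothetical, not the NRG analysis.

```python
# Toy flood PSA integration: hazard curve convolved with a level-dependent CCDP
# (all numbers are invented assumptions for illustration).
import numpy as np

levels = np.linspace(4.0, 9.0, 501)                  # water level [m], assumed range
freq_exceed = 1e-2 * np.exp(-(levels - 4.0) / 0.6)   # assumed exceedance frequency [/yr]
freq_density = -np.gradient(freq_exceed, levels)     # frequency density over level

NOP = 8.0
# Assumed CCDP: certain core damage above the NOP; below it, core damage only via
# random failure of remaining safety systems; an assumed dike crest at 7 m adds a step.
ccdp = np.where(levels >= NOP, 1.0, np.where(levels >= 7.0, 1e-2, 1e-4))

dl = levels[1] - levels[0]
cdf_flood = np.sum(freq_density * ccdp) * dl
naive = 1e-2 * np.exp(-(NOP - 4.0) / 0.6)            # "flood > NOP => core damage"
print(f"CDF with level-dependent CCDP:      {cdf_flood:.2e} /yr")
print(f"CDF assuming damage only above NOP: {naive:.2e} /yr")
```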

149

Insights from the Analyses of Other External Hazards for Nuclear Power Plants

James C. Lin

ABSG Consulting Inc., Irvine, California, United States

Because the probable maximum events selected in the FSAR for nuclear plants may not be the maximum possible events, they could be exceeded by more severe events in the future. As such, there is a need to re-evaluate the other external hazards, especially those associated with natural phenomena. To ensure that the maximum possible intensities of natural hazards are identified and analyzed, one has to be able to identify the physical limits of the parameters that define the intensities of the hazards. However, in some cases, it is truly difficult to identify the absolute physical limits of the parameters associated with selected natural hazards. One way to address the issue of exceeding the probable maximum event is to evaluate the quantitative risk, in terms of core damage and large early release frequencies, resulting from the specific hazard of concern. This requires the estimation of the hazard frequency. While it may be possible to assess the occurrence frequencies of selected natural phenomena of limited intensity, the uncertainty in the assessed frequencies of events with magnitudes beyond the range of historical occurrences may be uncomfortably high. Furthermore, some of the external hazards may not lend themselves to an easy assessment of their occurrence frequencies. As such, deterministic criteria will still need to be used for the risk evaluation of selected hazard events. This paper groups the entire set of other external hazards into a number of categories and discusses the characteristics, PRA evaluation methods, and other aspects of each of these groups.

245

A Next-Generation Risk Assessment Method for the Effect of a Slope and Foundation Ground on a Facility in a Nuclear Power Plant

Susumu Nakamura (a), Ikumasa Yoshida (b), Masahiro Shinoda (c), Tadasi Kawai (d), Hidetaka Nakamura (e), and Masaaki Murata (f)

a) Dept. of Civil & Environmental Eng., College of Engineering, Nihon University, Koriyama, Japan, b) Tokyo City University, Tokyo, Japan, c) Railway Technical Research Institute, Kunitachi, Japan, d) Tohoku University, Sendai, Japan, e) Japan Nuclear Regulation Authority, Tokyo, Japan, f) Mitsubishi Heavy Industries, Takasago, Japan

Following the nuclear power plant accident caused by The 2011 off the Pacific coast of Tohoku Earthquake, the treatment of the effects of ground features such as slopes and foundations on nuclear power plants was substantially revised in Japan, both in the regulatory guidance for seismic design and in the standard for seismic probabilistic safety assessment. The view of the limit state used to evaluate the fragility curve for the effect of a slope failure on facilities, as described in the latter standard, was improved by a geotechnical approach that considers the dynamic behavior of geomaterials after collapse. This view can be called a next-generation assessment of slope stability. The limit state for the effect of slope failure on a facility was specified on the basis of experimental considerations. Here, this view is reported together with experimental results obtained from shaking table tests and the corresponding numerical analysis. Experimental examples are also described that verify the effect of countermeasures against seismic actions exceeding the limit state. As an example of evaluating the movement of rock blocks induced by slope collapse, a numerical method and an example of its application are also described.
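
As background, a generic lognormal fragility curve of the kind used in seismic PSA can be sketched as follows; the median capacity and uncertainty values are illustrative only, not taken from the paper.

```python
# Generic lognormal fragility curve for a slope failure mode, as used in
# seismic PSA (illustrative parameter values, not the authors' results).
import numpy as np
from scipy.stats import norm

A_m    = 0.9   # assumed median ground-motion capacity of the slope [g]
beta_c = 0.4   # assumed composite logarithmic standard deviation

def fragility(pga):
    """Conditional probability of slope failure given peak ground acceleration."""
    return norm.cdf(np.log(pga / A_m) / beta_c)

for pga in (0.3, 0.6, 0.9, 1.2):
    print(f"PGA = {pga:.1f} g -> P(failure) = {fragility(pga):.3f}")
```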

289

Probabilistic Tsunami Hazard Analysis for Nuclear Power Plants on the East Coast of Korean Peninsula

In-Kil Choi, Min Kyu Kim, Hyun-Me Rhee

Korea Atomic Energy Research Institute, Daejeon, Korea

On March 11, 2011, a tremendous earthquake and tsunami struck the east coast of Japan, causing a severe accident at the Fukushima I NPP. Before the 2011 event, a tsunami was just one of many external events for an NPP, but after the Fukushima accident a tsunami has become a very important external hazard that should be considered for the safety of NPPs. Since the accident, many countries have attempted to develop tsunami safety assessment methods for nuclear power plants. To perform a tsunami safety assessment for an NPP, deterministic and probabilistic approaches can be applied. In this study, a probabilistic tsunami hazard analysis was performed for the east coast of Korea, where three NPP sites are located. An empirical analysis and a numerical analysis were performed for the assessment of the tsunami hazard.

305

External Events PSA for the Spent Fuel Pool of the Paks NPP

Attila Bareith (a), Jozsef Elter (b), Zoltan Karsa, Tamas Siklossy (a)

a) NUBIKI Nuclear Safety Research Institute, Budapest, Hungary, b) Paks Nuclear Power Plant Ltd., Paks, Hungary

Originally, probabilistic safety assessment of external events was limited to the analysis of earthquakes for the Paks Nuclear Power Plant in Hungary. The level 1 PSA for external events other than earthquakes was completed in 2012, showing a significant contribution of wind- and snow-related failures to core damage risk. On the basis of the external events PSA for the reactor, a similar assessment was subsequently performed in 2013 for a selected spent fuel pool of the Paks plant. The analysis proved to be a significant challenge due to the scarcity of data, lack of knowledge, and limitations of existing PSA methodologies. This paper presents an overview of the external events PSA performed for the spent fuel pool of the Paks NPP. Important methodological aspects relevant to the spent fuel pool external hazard PSA are summarized. Although some important challenges had already been encountered during the reactor PSA – and initiated follow-on analyses and development efforts – the most important lessons and the analysis areas needing further elaboration are summarized and highlighted using the example of the spent fuel pool PSA, to ensure completeness in discussing key analysis findings and unresolved issues.

W14 Safety Management and Decision Making I

1:30 PM Waialua

Chair: Chris Everett, Information Systems Laboratories, Inc.

132

Ramifications of Modeling Impact on Regulatory Decision-Making - A Practical Example

Ching Guey

Tennessee Valley Authority, Chattanooga, TN, U.S.A.

PRA models have been used for nuclear power plants in several areas, including Maintenance Rule (MR) a(4), the Reactor Oversight Process (ROP) and the Mitigating System Performance Index (MSPI). As part of the living PRA program, the PRA model has been updated to reflect operating experience, feedback from applications, and more recent PRA data and methodology. PRA modeling details which can affect the regulatory decision-making of the Emergency Diesel Generator (EDG) MSPI (highlighting key PRA assumptions and design basis requirements under multi-unit accidents) and the ROP process are discussed. Risk insights on the key assumptions in both deterministic and PRA modeling which may affect the MSPI program and ROP process are presented. Areas of improvement for managing the living PRA program for regulatory decision-making more effectively, resulting from the lessons learned from a practical example with several regulatory ramifications, are summarized. These include the need for realistic modeling of design features of interest, realistic success criteria for multi-unit accident scenarios, and dependency treatment in human reliability analysis (HRA).

138

A Fresh Look at Barriers from Alternative Perspectives on Risk

Xue Yang, Stein Haugen

Norwegian University of Science and Technology, Trondheim, Norway

This paper takes a fresh look at alternative perspectives on major accident causation theories to highlight the fact that these perspectives can supplement and improve the energy barrier perspective. The paper starts from a literature study of the energy barrier perspective, Man-Made Disaster theory (MMD), the Conflicting Objective Perspective (COP), Normal Accident Theory (NAT), High Reliability Organization theory (HRO), Resilience Engineering (RE), and the System-Theoretic Accident Model and Processes (STAMP) model, to identify the main concepts and critical factors. A further study of the safety barrier perspective is carried out using the STAMP methodology to understand how barrier functions can fail. It was found that the alternative perspectives can supplement the barrier perspective by structurally analyzing possible failure causes of barrier functions (STAMP, MMD), looking for the driving forces behind unsafe decisions and unsafe actions when humans interact with or are part of barrier systems (COP, HRO), and emphasizing possible complex interactions and tight coupling within barrier functions (NAT, RE). Furthermore, suggestions for barrier management based on best practices from these perspectives are presented; in further work these will be developed into concrete risk reduction measures, such as checklists, audit schemes, or indicators, to help decision-makers better comprehend and maintain the performance of barrier functions.

140

Monitoring Major Accident Risk in Offshore Oil and Gas Activities by Leading Indicators

Helene Kjær Thorsen (a) and Ove Njå (b)

a) Safetec Nordic AS, Oslo, Norway, b) University of Stavanger, Stavanger, Norway

In recent years, there has been a growing awareness that major accident risks should be monitored using risk indicators. We distinguish between leading and lagging indicators, because major accidents are rare events and the underlying causes are often fragmented and difficult to measure. However, it is a demanding task to develop appropriate leading indicators, because accident theories are disputed both in the research literature and by practitioners. This paper presents the results from a study of a major oil and gas company's risk management processes and its use of indicators related to offshore installations. The work is based on analyses of accident reports, a literature review and interviews with offshore installation managers and platform integrity personnel. We revealed major differences in attitudes among significant decision makers in relation to the use of risk indicators, spanning from skepticism and no use to in-depth registration and analysis. However, all the offshore installation managers addressed the importance of a holistic view of risk and safety. Based on our findings, we have developed an indicator set consisting of 16 leading indicators, covering technical, operational and organizational factors influencing major accident risk on offshore installations.

185

The Role of NASA Safety Thresholds and Goals in Achieving Adequate Safety

Homayoon Dezfuli (a), Chris Everett (b), Allan Benjamin (c), Bob Youngblood (d), and Martin Feather (e)

a) NASA, Washington, DC, USA, b) ISL, Rockville, MD, USA, c) Independent Consultant, Albuquerque, NM, USA, d) Idaho National Laboratory, Idaho Falls, ID, USA, e) Jet Propulsion Laboratory, Pasadena, CA,USA

NASA has recently instituted requirements for establishing Agency-level safety thresholds and goals that define long-term targeted and maximum tolerable levels of risk to the crew as guidance to developers in evaluating “how safe is safe enough” for a given type of mission. This paper discusses some key concepts regarding the role of the Agency’s safety thresholds and goals in achieving adequate safety, where adequate safety entails not only meeting a minimum tolerable level of safety (e.g., as determined from safety thresholds and goals), but being as safe as reasonably practicable (ASARP), regardless of how safe the system is in absolute terms. Safety thresholds and goals are discussed in the context of the Risk-Informed Safety Case (RISC): A structured argument, supported by a body of evidence, that provides a compelling, comprehensible and valid case that a system is or will be adequately safe for a given application in a given environment. In this context, meeting of safety thresholds and goals is one of a number of distinct safety objectives, and the system safety analysis provides evidence to substantiate claims about the system with respect to satisfaction of the thresholds and goals.

195

Improving Consistency Checks between Safety Concepts and View Based Architecture Design

Pablo Oliveira Antonino, Mario Trapp

Fraunhofer IESE, Kaiserslautern, Germany

Despite the early adoption of ISO 26262 by the automotive industry, managing functional safety in the early phases of system development remains a challenge. One key problem is how to efficiently keep safety assurance artifacts up to date given the recurrent requirements changes during the system's lifecycle. Here, there is a real demand for means to support the creation, modification, and reuse of safety assurance documents, like the Safety Concepts described in ISO 26262. One major aspect of this challenge is inconsistency between safety concepts and the system architecture. Usually created by different teams at different times and in different contexts of the development environment, these artifacts are often completely disassociated. This becomes even more evident when system maintenance is necessary; in this case, the inconsistencies result in intensive efforts to update the safety concepts impacted by the changes and, consequently, significantly decrease the efficiency and efficacy of safety assurance. To overcome this challenge, we propose a model-based formalization approach for specifying safety concepts that allows creating precise traces to architectural elements while specifying safety concepts in natural language. We observed that our approach minimizes the inconsistencies between safety models and architecture models, and offers a basis for performing automated completeness and consistency checks.
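
A toy illustration of the kind of automated check such a formalization enables is sketched below: safety concept items carry explicit trace links to architecture elements, and dangling or missing traces are reported. All names are invented, and this is not the authors' tooling.

```python
# Toy consistency/completeness check between safety concept items and
# architecture elements (invented names; illustrative only).
architecture = {"BrakeController", "WheelSpeedSensor", "CANBus"}

safety_concepts = [
    {"id": "SC-01", "text": "Detect implausible wheel speed", "traces": ["WheelSpeedSensor"]},
    {"id": "SC-02", "text": "Transition to safe state on bus loss", "traces": ["CANBus", "Watchdog"]},
    {"id": "SC-03", "text": "Limit brake torque gradient", "traces": []},
]

for sc in safety_concepts:
    dangling = [t for t in sc["traces"] if t not in architecture]
    if dangling:
        print(f"{sc['id']}: traces to unknown architecture elements {dangling}")
    if not sc["traces"]:
        print(f"{sc['id']}: no trace to any architecture element (incomplete)")
```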

W15 Reliability of Passive Systems I

1:30 PM Wai'anae

Chair: James Knudsen, Idaho National Laboratory

379

Uncertainty Evaluation in Multi-State Physics Based Aging Assessment of Passive Components

Askin Guler, Tunc Aldemir, and Richard Denning

Nuclear Engineering Program The Ohio State University, Columbus, OH, USA

A methodology is presented to evaluate aging degradation of passive components under uncertainty. Stress corrosion cracking (SCC) is selected as the example aging phenomenon, and the methodology is implemented on the pressurizer surge line pipe weld of a pressurized water reactor. The degradation is described by a multi-state model consisting of six differential equations with system-history-dependent transition rates. The input data to the model include the operating temperature, weld residual stress, stress intensity factor, and thermal activation energies for crack initiation and crack growth. The associated uncertainties are represented by probability distributions derived from historical data, experimental data, expert elicitation, physics, or a combination of these. Latin Hypercube Sampling is used to generate observations from the distributions governing these parameters, with a two-step approach that distinguishes between aleatory and epistemic uncertainties. The degradation model is solved by a semi-Markov approach using the concept of sojourn time to account for the system-history dependence of the transition rates. The results are compared to a single-step sampling process and show the highest sensitivity of damage to the weld residual stress.
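
The two-step sampling idea can be sketched as follows, with the SCC model replaced by a simple stand-in initiation-time distribution; all distributions are assumptions, not the paper's six-equation semi-Markov model.

```python
# Two-loop sampling sketch: epistemic parameters in an outer loop, aleatory
# variability in an inner Latin Hypercube loop (stand-in degradation model).
import numpy as np
from scipy.stats import norm, qmc

rng = np.random.default_rng(1)
n_epistemic, n_aleatory = 20, 200
sampler = qmc.LatinHypercube(d=1, seed=2)

p40 = []  # per epistemic sample: P(crack initiation before 40 years)
for _ in range(n_epistemic):
    # Outer (epistemic) loop: uncertain median initiation time [yr], assumed lognormal
    t_median = rng.lognormal(mean=np.log(30.0), sigma=0.3)
    # Inner (aleatory) loop: LHS over assumed unit-to-unit lognormal scatter
    u = sampler.random(n_aleatory).ravel()
    t_init = t_median * np.exp(0.5 * norm.ppf(u))
    p40.append(np.mean(t_init < 40.0))

print(f"P(initiation < 40 yr): median {np.median(p40):.2f}, "
      f"90% epistemic interval [{np.percentile(p40, 5):.2f}, {np.percentile(p40, 95):.2f}]")
```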

537

Passive System Evaluation by Using an Integral Thermal-Hydraulic Test Facility in the Passive NPP (Nuclear Power Plant) PSA (Probabilistic Safety Assessment) Process

Ruichang Zhao, Huajian Chang, Yang Xiang

State Nuclear Power Technology Research & Development Center, Beijing, China

Passive engineered safety systems are designed to act through physical phenomena or passive procedures during the postulated accident scenarios of AP600/AP1000/CAP1400 passive-safety-type nuclear power plants. Generally, associated thermal-hydraulic (T-H) experiments have been performed to support research on specific physical phenomena or the development of evaluation models for these scenarios. Data from T-H experiments are credible and direct. However, because the safety systems of an NPP are very large, it is almost impractical to simulate a whole physical process with full-scale test facilities, and reduced-scale test facilities have therefore been applied in design verification and safety research. It is thus worth exploring how scaled integral T-H experiments targeted at specific physical processes or phenomena can be used in the PSA process. Scaling analysis is usually applied in integral test facility design and construction, and especially in data evaluation. Through scaling analysis and the evaluation of experimental data, the uncertainty of each test result can be obtained, and the trends and uncertainties of specific parameters of the physical phenomena or processes can be explained. If test facilities and experiments are designed through appropriate scaling analysis, the most important test results can represent the prototype behaviour to some degree (at some uncertainty level); the tests can then represent the target physical phenomena, and the prototype passive system can be explored from the experimental results within some level of uncertainty. The Containment Experiment via integral safety validation test facility (CERT) has been set up for design validation of the passive containment cooling system (PCCS) of the CAP1400 NPP. CERT can simulate LOCA or MSLB accident scenarios with power-to-volume ratio scaling. The figure of merit of CERT is the pressure inside the containment. Different trends of the figure of merit can be obtained by adjusting the boundary or initial conditions of the experiment, such as the total enthalpy of steam injection, the flow rate and coverage of the cooling water outside the containment, the wind speed in the annulus (the structure between the shield and the steel containment), and the concentration of non-condensable gas (helium, used to simulate hydrogen during an accident). In addition, the probability that the figure of merit exceeds the design criterion can be quantitatively estimated. Based on the scaling analysis and the experimental results (from the integral test facility) for the corresponding important physical phenomena, a quantitative performance assessment of the PCCS can be obtained. These evaluations can then support the Level 2 PSA of a passive-safety NPP.

576

Probabilistic Assessment of Composite Plate Failure Behavior under Specific Mechanical Stresses

Somayeh Oftadeh, Mohammad Pourgol-Mohammad, and Mojtaba Yazdani

Sahand University of Technology, Tabriz, Iran

This research focuses on the determination of composite material reliability and the probabilistic assessment of composite failure models. The principal task is to determine the probability distribution function of the composite behaviour in order to explain scatter and size effects and to describe composite reliability. A model for the statistical failure of composite materials is presented. As the first step of the reliability evaluation, it is essential to understand the candidate failure modes of composite materials and their influence on structural performance. A failure mode and effects analysis (FMEA) is conducted; based on the FMEA results, failure of a lamina is the main cause of composite laminate failure. Considering only lamina failure, the reliability analysis is performed using Monte Carlo simulation, and a process is proposed to evaluate the reliability of composite structures. A composite structure of [02/±45/90]4 graphite-fibre/epoxy-matrix is selected as the case study for presenting the methodology. The analysis of the results concludes that the Weibull distribution fits with sufficient confidence to represent the composite behaviour. In addition to the sample size, which directly affects the accuracy of the evaluated reliability, the magnitude of the input variance is another factor that plays an important role in the uncertainty of the analysis and the convergence of the results.
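
A Monte Carlo sketch in the spirit of this analysis is shown below; first-ply failure, equal ply stress and the Weibull parameters are all simplifying assumptions rather than the authors' model.

```python
# Monte Carlo first-ply-failure reliability with Weibull ply strengths
# (illustrative parameters; equal ply stress is a deliberate simplification).
import numpy as np

rng = np.random.default_rng(3)
n_sim, n_plies = 100_000, 20          # a [0_2/+-45/90]_4 layup has 20 plies

shape, scale = 12.0, 600.0            # assumed Weibull shape / scale [MPa] of ply strength
applied_stress = 350.0                # assumed applied ply-level stress [MPa]

# Sample ply strengths; the laminate is assumed failed at first ply failure.
strengths = scale * rng.weibull(shape, size=(n_sim, n_plies))
laminate_fails = strengths.min(axis=1) < applied_stress

reliability = 1.0 - laminate_fails.mean()
print(f"Estimated laminate reliability at {applied_stress} MPa: {reliability:.4f}")
```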

216

Development of Feedwater Line & Main Steam Line Break Initiating Event Frequencies for Ringhals Pressurized Water Reactors

Anders Olsson, Erik Persson Sunde (a), and Cilla Andersson (b)

a) Lloyd's Register Consulting, Stockholm, Sweden, b) Ringhals NPP, Väröbacka, Sweden

During the last years, the LOCA initiating event frequencies in the PSAs for the three Ringhals PWR units have been updated using the piping reliability data provided in the R-Book. Since the data currently presented in the R-Book only cover ASME Code Class 1 and 2, they cannot be used for initiating event frequency updates for ASME Code Class 3 and 4 (the intention, though, is that the R-Book shall also cover Code Class 3 and 4 in the future). In order to proceed with the initiating pipe break frequency update for the Ringhals PWR units, a project has been started with the purpose of developing updated initiating event frequencies for certain Feed Water Line Break and Main Steam Line Break scenarios. The updated initiating event frequencies shall account for the known piping damage and degradation mechanisms, applicable industry-wide and plant-specific service experience data, the plant-specific piping layout and material specifications, as well as the plant-specific risk-informed in-service inspection (RI-ISI) program currently implemented for the Main Steam Line and Main Feed Water systems. The updated frequencies shall reflect state-of-the-art piping reliability models that explicitly address aleatory and epistemic uncertainties. Also, the data analysis that underlies this frequency calculation shall be consistent with the requirements of the ASME/ANS PRA Standard Capability Category II. The causes of pipe failure (e.g., loss of structural integrity) are attributed to damage or degradation mechanisms. Oftentimes a failure occurs due to synergistic effects involving the operating environment and loading conditions. In piping reliability analysis, two classes of failure are considered. The first class is so-called "Event-Driven Failures". These failures are pipe-stress driven and attributed to conditions involving combinations of equipment failures (other than the piping itself; e.g., a loose/failed pipe support or leaking valve) and unanticipated loading (e.g., a hydraulic transient or operator error). Examples of event-driven failures include various fatigue failures (high-cycle vibration fatigue, thermal fatigue). The second class is defined as "Failures Attributed to Environmental Degradation". Environmental degradation is defined by unique sets of conjoint requirements that include the operating environment, material and loading conditions. These conjoint requirements differ extensively across different piping designs (material, diameter, wall thickness, method of construction/fabrication). Similarly, pipe flaw incubation times and growth rates differ extensively across the different combinations of degradation susceptibility and operating environments. For the piping systems included in the scope (i.e. the Main Steam and Main Feed Water systems), flow-accelerated corrosion constitutes a potentially key degradation mechanism. The initiating event frequency calculation will be based on a methodology similar to the one used in previous applications of the R-Book. This means that service experience data together with a Bayesian analysis framework will be utilized to derive piping reliability parameters for input to PSA models and PSA model applications. The piping service experience data input to the pipe failure rate and rupture frequency calculations will be taken from the Lloyd's Register Consulting proprietary PIPExp database, which includes detailed information on piping damage and degradation mechanisms in Code Class 1, 2 and 3 and non-Code piping systems.
The paper will present the work that has been performed together with conclusions and insights achieved during the project.
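
As an illustration of the Bayesian framework mentioned above, a conjugate gamma-Poisson update of a pipe failure rate is sketched below; the prior and service data values are invented, not PIPExp data.

```python
# Conjugate gamma-Poisson Bayesian update of a pipe failure rate
# (all prior and service experience numbers are invented assumptions).
from scipy.stats import gamma

# Prior on the failure rate [per weld-year], e.g. from generic experience:
a0, b0 = 0.5, 1.0e4              # gamma shape and rate (assumed)

# Service experience: observed failures over an exposure time (assumed)
n_failures, exposure = 2, 3.0e4  # failures, weld-years

a_post, b_post = a0 + n_failures, b0 + exposure
posterior = gamma(a_post, scale=1.0 / b_post)

print(f"Posterior mean failure rate: {posterior.mean():.2e} /weld-yr")
print(f"90% credible interval: [{posterior.ppf(0.05):.2e}, {posterior.ppf(0.95):.2e}]")
```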

W16 Uncertainty, Sensitivity, and Bayesian Methods II

1:30 PM Ewa

Chair: Mohammad Pourgol-Mohammad, Sahand University of Technology

158

Improvement of the Reliability and Robustness of Variance-Based Sensitivity Analysis of Final Repository Models by Application of Output Transformation

Dirk-Alexander Becker

Gesellschaft fuer Anlagen- und Reaktorsicherheit (GRS) mbH, Braunschweig, Germany

Long-term performance assessment models for final repositories for radioactive waste typically produce heavily tailed output distributions that extend over several orders of magnitude and under specific circumstances can even include a significant number of exact zeros. A variance-based sensitivity analysis gives a strong overweight to the typically very few values that are far away from the expected value of the distribution, which can lead to a low robustness of the evaluation. Moreover, while a variation of the model output, even over orders of magnitude, is of little interest if it happens on a radiologically irrelevant level, a mere factor of 2 near the permissible dose limits can be very important. Both types of problems can be mitigated by applying appropriate output transformations before performing the sensitivity analysis. The effects of different transformations on the sensitivity analysis results for typical final repository model systems are demonstrated.
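
The core effect can be reproduced with a toy heavy-tailed model and a basic pick-freeze estimator of the first-order index; this sketch is illustrative and is not the repository model.

```python
# Effect of an output transformation on a variance-based first-order index,
# demonstrated on a toy heavy-tailed model with a pick-freeze estimator.
import numpy as np

rng = np.random.default_rng(4)
N = 200_000

def model(x1, x2):
    return np.exp(3.0 * x1 + 1.0 * x2)       # heavily tailed, lognormal output

def first_order_s1(transform):
    """Pick-freeze estimate of the first-order index of x1 for transform(Y)."""
    x1, x2, x2b = rng.standard_normal((3, N))
    y  = transform(model(x1, x2))
    yb = transform(model(x1, x2b))           # x1 kept fixed, x2 resampled
    return (np.mean(y * yb) - np.mean(y) * np.mean(yb)) / np.var(y)

# The raw-output estimate is dominated by a few extreme values and is unstable;
# after a log transformation the index is robust (analytic value: 9/10 = 0.90).
print(f"S1(x1), raw output: {first_order_s1(lambda y: y):.2f}")
print(f"S1(x1), log output: {first_order_s1(np.log):.2f}")
```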

177

Bayesian Approach Implementation on Quick Access Recorder Data for Estimating Parameters and Model Validation

Javensius Sembiring, Lukas Höhndorf, and Florian Holzapfel

Institute of Flight System Dynamics TUM, München, Germany

This paper presents the implementation of Bayesian inference on Quick Access Recorder data for parameter estimation purposes. The posterior density is sampled by employing the Markov Chain Monte Carlo method. The reason for employing Bayesian inference instead of a classical method such as maximum likelihood is that the data used in this paper have more uncertainties than data obtained from flight testing. These uncertainties stem from the facts that Quick Access Recorder data are obtained from untailored flight maneuvers, variables are measured and recorded at low and differing sampling rates, control inputs such as elevator, rudder and aileron are not optimized, and flights are performed as part of daily operational activities (wind and turbulence may disturb the measured variables). Results show that this approach is capable of capturing the uncertainties in the data, since the estimated parameters are presented in the form of distributions. The flight data used as a case study were obtained from an Airbus A320 Quick Access Recorder device. The parameters estimated in this study include thrust and the effect of spoiler and flap deflection on the lift and drag coefficients during the approach phase.
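
The inference mechanics can be sketched with a minimal random-walk Metropolis sampler; the aircraft model is replaced by a toy linear lift model and the "QAR data" are synthetic, so only the sampling idea matches the paper's description.

```python
# Minimal random-walk Metropolis sketch for Bayesian parameter estimation
# (toy linear lift model, synthetic data; illustrative only).
import numpy as np

rng = np.random.default_rng(5)

# Synthetic noisy measurements: CL = CL0 + CLa * alpha + noise
alpha = rng.uniform(0.0, 0.2, 300)                     # angle of attack [rad]
cl_meas = 0.3 + 5.0 * alpha + rng.normal(0, 0.05, 300)

def log_post(theta):
    """Log posterior with a flat prior and known measurement noise (assumed)."""
    cl0, cla = theta
    resid = cl_meas - (cl0 + cla * alpha)
    return -0.5 * np.sum(resid**2) / 0.05**2

slope, intercept = np.polyfit(alpha, cl_meas, 1)       # cheap starting point
theta = np.array([intercept, slope])
lp = log_post(theta)
step = np.array([0.01, 0.1])                           # random-walk proposal widths
samples = []
for _ in range(20_000):
    prop = theta + step * rng.standard_normal(2)
    lp_prop = log_post(prop)
    if np.log(rng.random()) < lp_prop - lp:            # Metropolis accept/reject
        theta, lp = prop, lp_prop
    samples.append(theta)

post = np.array(samples[5_000:])                       # discard burn-in
print("posterior mean (CL0, CLa):", post.mean(axis=0))
print("posterior std  (CL0, CLa):", post.std(axis=0))
```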

194

Comparative Assessment of Severe Accidents Risk in the Energy Sector: Uncertainty Estimation Using a Combination of Weighting Tree and Bayesian Hierarchical Models

M. Spada, P. Burgherr and S. Hirschberg

Laboratory for Energy Systems Analysis, Paul Scherrer Institute (PSI), Villigen PSI, Switzerland

This study analyzes the risk of severe accidents causing five or more fatalities within the full fossil energy chains. The risk is quantified separately for OECD and non-OECD countries. In addition, for the coal chain, Chinese data are analyzed separately because it has been shown that data prior to 1994 were subject to strong underreporting. In order to assess the risk and its uncertainty, a Bayesian hierarchical model was applied, which yields analytical functions for the frequency and severity distributions. Furthermore, Bayesian data analysis inherently delivers a measure of the combined epistemic and aleatory uncertainties, through the prior distribution and likelihood function that compose Bayes' theorem. In this study, in order to reduce the epistemic uncertainty related to the subjective choice of the likelihood function, Bayesian Model Averaging (BMA) is applied. In BMA the final posterior distribution is a weighted combination of the posterior distributions assessed for the different likelihood functions (models). The proposed approach provides a unified framework that comprehensively covers accident risks in energy chains, and allows specific risk indicators, including their uncertainties, to be calculated for use in a holistic evaluation of energy technologies.

234

Investigation of Different Sampling and Sensitivity Analysis Methods Applied to a Complex Model for a Final Repository for Radioactive Waste

Sabine M. Spiessl, and Dirk-A. Becker

Gesellschaft fuer Anlagen- und Reaktorsicherheit (GRS) mbH, Braunschweig, Germany

The performance of different types of sensitivity analysis methods in combination with different sampling methods has been investigated on the basis of a performance assessment model for a repository for Low and Intermediate Level radioactive Waste (LILW) in rock salt. This paper provides an insight into the results obtained with the following methods for sensitivity analysis: (i) a graphical method (CSM plot), (ii) a rank-regression-based method (SRRC) and (iii) a simple first-order sensitivity index calculation scheme (EASI). These methods were combined with random and LpTau sampling. The most robust results were obtained using LpTau sampling. The results obtained with the CSM and SRRC analyses are fairly comparable. The EASI results, however, assign the dominating role to a parameter that seemed to be of secondary importance according to the results of the other two methods. In addition, in the early phase below 10^4 years, the EASI results seem to be of low robustness.

253

Importance Analysis for Uncertain Thermal-Hydraulics Transient Computations

Mohammad Pourgol-Mohammad (a), Seyed Mohsen Hoseyni (b)

a) Department of Mechanical Engineering, Sahand University of Technology, Tabriz, Iran, b) Department of Basic Sciences, East Tehran Branch, Islamic Azad University, Tehran, Iran

Results of the codes simulating transients and abnormal conditions in nuclear power plants are inevitably uncertain. In application to thermal-hydraulic calculations, uncertainty importance analysis can be used to quantitatively confirm the results of the qualitative phenomena identification and ranking table (PIRT). Several methodologies have been developed to address uncertainty importance assessment, but the existing uncertainty importance measures, which are mainly devised for PRA applications, are not suitable for the lengthy calculations of complex codes like RELAP. For quantifying the degree of contribution of each phenomenon to the total uncertainty of the output, an uncertainty importance measure with affordable computational cost is therefore very desirable. Such a new uncertainty importance measure is introduced in this article to cope with the aforementioned deficiencies of thermal-hydraulic uncertainty importance analysis. Important parameters are identified qualitatively by the modified PIRT approach, while their uncertainty importance is quantified by the proposed index. Application of the proposed methodology is demonstrated on the LOFT-LB1 test facility.

W17 Integrated Deterministic and Probabilistic Safety Assessment II

1:30 PM Kona

Chair: Martina Kloos, Gesellschaft für Anlagen- und Reaktorsicherheit (GRS) mbH

304

Insights from an Integrated Deterministic Probabilistic Safety Analysis (IDPSA) of a Fire Scenario

M. Kloos, J. Peschke (a), B. Forell (b)

a) GRS mbH, Garching, Germany, b) GRS mbH, Cologne, Germany

For assessing the performance of fire fighting means, with emphasis on human actions, an integrated deterministic probabilistic safety analysis (IDPSA) was performed. This analysis allows for quite realistic modelling and simulation of the interaction of the fire dynamics with relevant stochastic influences, which refer, in the presented application, to the timing and outcome of human actions as well as to the operability of technical systems. For the analysis, the MCDET (Monte Carlo Dynamic Event Tree) tool was combined with the FDS (Fire Dynamics Simulator) code. The combination provided a sample of dynamic event trees comprising many different time series of quantities of the fire evolution, associated with corresponding conditional occurrence probabilities. These results were used to derive exemplary probabilistic fire safety assessments based on criteria such as the temperatures of cable targets or the time periods with target temperatures exceeding critical values. The paper outlines the analysis steps and presents a selection of results, which also includes a quantification of the influence of epistemic uncertainties. Insights and lessons learned from the analysis are discussed.

430

Uncertainty Propagation in Dynamic Event Trees -Initial Results for a Modified Tank Problem

Durga R. Karanki, Vinh N. Dang, and Michael T. MacMillan

Paul Scherrer Institute, Villigen PSI, Switzerland

The coupling of plant simulation models and stochastic models representing failure events in Dynamic Event Trees (DET) is a framework to model the dynamic interactions among physical processes, equipment failures, and operator responses. The benefits of the framework, as a number of applications show, include, for instance, the capability to account for the effect of the aleatory timing of equipment failures or operator actions on sequence outcomes, and to consider the impact of the number of available trains (rather than having to identify the bounding cases). The integration of physical and stochastic models may additionally enhance the treatment of uncertainties. Probabilistic Safety Assessments as currently implemented, e.g. for Level 1, propagate the (epistemic) uncertainties in the probability distributions for the failure probabilities or frequencies; this approach does not, however, propagate uncertainties in the physical model (parameters). The coupling of deterministic (physical) and probabilistic models in integrated simulations such as the DET allows both types of uncertainties to be considered. The starting point in this work is to wrap an epistemic loop, in which the epistemic distributions are sampled, around the DET simulation. To examine the adequacy of this approach, and to allow different approaches and approximations (for uncertainty propagation) to be compared, a simple problem is proposed as a basis for comparisons. This paper presents initial results on uncertainty propagation in DETs, obtained for a tank problem that is derived from a similar one defined for control system failures and dynamic reliability. An operator response has been added to consider stochastic timing.
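
The epistemic wrapping idea can be sketched on a deliberately simplified tank overflow problem; the physics and all distributions below are invented stand-ins for the paper's benchmark.

```python
# Epistemic sampling loop wrapped around an aleatory tank-overflow calculation
# (invented stand-in physics and distributions; illustrative only).
import numpy as np

rng = np.random.default_rng(6)
H_OVERFLOW = 2.0   # assumed overflow level [m]

def overflow_prob(fill_rate, n=20_000):
    """Aleatory loop: stochastic operator recovery time after control failure."""
    t_recover = rng.lognormal(np.log(1.0), 0.5, n)     # recovery delay [h], assumed
    level = fill_rate * t_recover                      # simple post-failure physical model
    return np.mean(level > H_OVERFLOW)

# Epistemic loop: the post-failure fill rate [m/h] is an uncertain model parameter
fill_rates = rng.lognormal(np.log(1.0), 0.4, 30)
results = [overflow_prob(fr) for fr in fill_rates]
print(f"P(overflow): median {np.median(results):.3f}, 90% epistemic band "
      f"[{np.percentile(results, 5):.3f}, {np.percentile(results, 95):.3f}]")
```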

460

An Approach to Physics Based Surrogate Model Development for Application with IDPSA

Ignas Mickus, Kaspar Kööp, Marti Jeltsov (a), Yuri Vorobyev (b), Walter Villanueva, and Pavel Kudinov (a)

a) Royal Institute of Technology (KTH), Stockholm, Sweden, b) Moscow Power Engineering Institute, Moscow, Russia

Integrated Deterministic Probabilistic Safety Assessment (IDPSA) methodology is a powerful tool for the identification of failure domains when both stochastic events and time-dependent physical processes are important. The computational efficiency of the deterministic models is one of the limiting factors for detailed exploration of the event space. Pool-type designs of Generation IV heavy liquid metal cooled reactors make it important to capture intricate 3D flow phenomena in safety analysis. Specifically, mixing and stratification in 3D elements can affect the efficiency of passive safety systems based on natural circulation. Conventional 1D System Thermal Hydraulics (STH) codes are incapable of predicting such complex 3D phenomena, while Computational Fluid Dynamics (CFD) codes are too computationally expensive to be used for simulation of the whole reactor primary coolant system. One proposed solution is code coupling, where all 1D components are simulated with STH and 3D components with CFD codes. However, modeling with coupled codes is still too time consuming to be used directly in IDPSA methodologies, which require thousands of simulations. The goal of this work is to develop a computationally efficient surrogate model (SM) which captures the key physics of the complex thermal-hydraulic phenomena in the 3D elements and can be coupled with 1D STH codes instead of CFD. TALL-3D is a lead-bismuth eutectic thermal-hydraulic loop which incorporates both 1D and 3D elements. Coupled STH-CFD simulations of typical TALL-3D transients (such as the transition from forced to natural circulation) are used to calibrate the surrogate model parameters. The current implementation and the limitations of the surrogate modeling are discussed in detail in the paper.

315

A Toolkit for Integrated Deterministic and Probabilistic Risk Assessment for Hydrogen Infrastructure

Katrina M. Groth (a), Andrei V. Tchouvelev (b,c)

a) Sandia National Laboratories, Albuquerque, NM, USA, b) AVT Research, Inc., Canada, c) International Association for Hydrogen Safety, HySafe

There has been increasing interest in using Quantitative Risk Assessment (QRA) to help improve the safety of hydrogen infrastructure and applications. Hydrogen infrastructure for transportation (e.g. fueling fuel cell vehicles) or stationary (e.g. back-up power) applications is a relatively new area for the application of QRA compared with traditional industrial production and use, and as a result there are few tools designed to enable QRA for this emerging sector, and few existing QRA tools contain models that have been developed and validated for use in small-scale hydrogen applications. However, in the past several years, there has been significant progress in developing and validating deterministic physical and engineering models for hydrogen dispersion, ignition, and flame behavior. In parallel, there has been progress in developing defensible probabilistic models for the occurrence of events such as hydrogen release and ignition. While models and data are available, using this information is difficult due to a lack of readily available tools for integrating the deterministic and probabilistic components into a single analysis framework. This paper discusses the first steps in building an integrated toolkit for performing QRA on hydrogen transportation technologies and suggests directions for extending the toolkit.

W21 Human Reliability Analysis IV

3:50 PM Honolulu

Chair: Jeffrey Julius, Scientech

398

A Human Reliability Analysis Approach Based on the Concepts of Meta-Operation and Task Complexity

Yongping Qiu (a), Dan Pan, Zhizhong Li, and Peng Liu (b)

a) Shanghai Nuclear Engineering Research & Design Institute, Shanghai, China, b) Department of Industrial Engineering, Tsinghua University, Beijing, China

To avoid the difficulties in human error data collection or elicitation and to make Human Reliability Analysis (HRA) more generic, a new HRA approach is proposed based on meta-operation identification and task complexity measurement. In this approach, a task is decomposed into meta-operations, defined as the smallest identifiable operations or activities exhibited in the performance of a task. An eye-tracking experiment was conducted, from which it was concluded that the meta-operation description model of procedure tasks is reasonable. The action complexity of these meta-operations is then measured. Experimental results indicate that the complexity scores correlate significantly with, and explain a high percentage of the variation in, task completion time and error rate. It is thus possible to establish a generic model for human error probability (HEP) based on task complexity. We intend to provide HEP parametric models rather than HEP tables. To improve the task complexity quantification, a survey of 69 licensed NPP operators was conducted to identify complexity factors in main control rooms. Significant factors will later be included in the complexity quantification model. In the proposed method, most PSFs are treated as complexity factors and thus not used as modifiers as in many existing HRA methods.

543

Human Reliability Analysis for Digital Human-Machine Interfaces: A Wish List for Future Research

Ronald L. Boring

Idaho National Laboratory, Idaho Falls, Idaho, USA

This paper addresses the fact that existing human reliability analysis (HRA) methods do not provide guidance on digital human-machine interfaces (HMIs). Digital HMIs are becoming ubiquitous in nuclear power operations, whether through control room modernization or new-build control rooms. Legacy analog technologies like instrumentation and control (I&C) systems are costly to support, and vendors no longer develop or support analog technology, which is considered technologically obsolete. Yet, despite the inevitability of digital HMI, no current HRA method provides guidance on how to treat human reliability considerations for digital technologies.

495

Phoenix – A Model-Based Human Reliability Analysis Methodology: Qualitative Analysis Overview

Nsimah J. Ekanem and Ali Mosleh

Center for Risk and Reliability, University of Maryland, College Park, USA

The Phoenix method is an attempt to address various issues in the field of human reliability analysis (HRA). It is built on a cognitive human response model, incorporates strong elements of current HRA good practices, leverages lessons learned from empirical studies, and takes advantage of the best features of existing and emerging HRA methods. The original framework of Phoenix was introduced in previous publications. This paper reports on the completed methodology, summarizing the steps and techniques of its qualitative analysis phase. The methodology introduces the "crew response tree", which provides a structure for capturing the context associated with human failure events (HFEs), including errors of omission and commission. It also uses a team-centered version of the Information, Decision and Action cognitive model and "macro-cognitive" abstractions of crew behavior, as well as relevant findings from the cognitive psychology literature and operating experience, to identify potential causes of failures and influencing factors during procedure-driven and knowledge-supported crew-plant interactions. The result is the set of identified HFEs and the likely scenarios leading to each. The methodology itself is generic in the sense that it is compatible with various quantification methods, and can be applied across various environments including nuclear, oil and gas, aerospace, and aviation.

496

Phoenix – A Model-Based Human Reliability Analysis Methodology: Quantitative Analysis Procedure and Data Base

Nsimah J. Ekanem and Ali Mosleh

Center for Risk and Reliability, University of Maryland, College Park, USA

A separate paper in this conference provides an overview of the qualitative analysis of Phoenix, a model-based Human Reliability Analysis (HRA) methodology. This paper discusses the quantitative analysis aspect, which builds on the three layers of the qualitative analysis (crew response tree – CRT, human response model, performance influencing factors – PIFs) by first assigning values to the PIFs that are consistent with the qualitative information gathered by the HRA analyst in the process. Thereafter, it generates estimates of the human error probability (HEP) for the human failure events (HFEs). Crew failure mode (CFM) cut-sets and the list of PIFs identified by the HRA analyst as relevant to the CRT scenarios used to model the HFE are the inputs to the quantitative analysis process. The model for quantification is a Bayesian Belief Network (BBN), which is used to model the context-specific effects of PIFs on CFMs and consequently on the HFE(s) identified in the CRT. The HEP estimate is obtained by quantifying the CFMs in the BBN model. As part of the quantitative analysis process, methodologies for PIF assessment and for the estimation of HEPs (including cases in which a cause-based explicit treatment of dependencies among HFEs is considered) have been developed.
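
A hand-coded miniature of the BBN quantification idea is sketched below, with two binary PIFs influencing one CFM; the structure and all probabilities are illustrative, not Phoenix's actual model or data.

```python
# Miniature BBN-style quantification: two binary PIFs feed one crew failure
# mode (CFM) through a conditional probability table (invented numbers).
import itertools

# Marginal probabilities of each PIF being in its "poor" state (assumed):
p_pif = {"time_pressure": 0.3, "hmi_quality_poor": 0.2}

# Conditional probability table: P(CFM | time_pressure, hmi_quality_poor)
cpt = {
    (False, False): 1e-3,
    (False, True):  5e-3,
    (True,  False): 1e-2,
    (True,  True):  5e-2,
}

# Marginalize over PIF states to get the context-averaged CFM probability:
p_cfm = 0.0
for tp, hmi in itertools.product([False, True], repeat=2):
    w = (p_pif["time_pressure"] if tp else 1 - p_pif["time_pressure"]) * \
        (p_pif["hmi_quality_poor"] if hmi else 1 - p_pif["hmi_quality_poor"])
    p_cfm += w * cpt[(tp, hmi)]

print(f"P(CFM) marginalized over PIFs: {p_cfm:.2e}")
# With PIF states fixed by the qualitative analysis, cpt[...] is used directly.
```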

494

Next Generation Human Reliability Analysis – Addressing Future Needs Today for Digital Control Systems

Jeffrey A. Julius (a), Parviz Moieni (b), Jan Grobbelaar and Kaydee Kohlhepp (a)

a) Scientech, a Curtiss-Wright Flow Control Company, Tukwila, WA, U.S.A., b) Scientech, a Curtiss-Wright Flow Control Company, San Diego, CA, U.S.A

This paper addresses issues and insights related to applying current human reliability analysis (HRA) techniques to the probabilistic risk assessment of digital control systems. Digital control systems are being used in new, advanced nuclear power plants as well as being implemented in older plants as upgrades. The use of digital control systems has been accompanied by challenges in PRA modeling because of several unique features of these newer systems. Among these is the fact that current human reliability models and data were developed before digital systems were introduced, and thus may need modification in order to properly assess the risk of nuclear power plant operation and to support PRA applications, including assessing the impact of upgrading to digital controls. This paper summarizes the EPRI HRA User Group activities as background information, recounts the group's experience with the Halden benchmarking project, and then suggests modifications to HRA methods and data in order to support assessments involving digital controls.

W22 Nuclear Engineering I

3:30 PM Kahuku

Chair: Mazleha Maskin, Malaysia Nuclear Agency

251

Discussion of Developing HTGR Emergency Action Levels Applying Probabilistic Risk Assessment

Liu Tao, Tong Jiejuan

Institute of Nuclear and New Energy Technology, Tsinghua University, Beijing, China, and the Key Laboratory of Advanced Reactor Engineering and Safety, Ministry of Education, Beijing, China

An emergency action level (EAL) is a pre-determined, site-specific, observable threshold for a plant initiating condition that places the plant in a given emergency classification level. The original EAL scheme was developed in the post-Three Mile Island accident era and documented in NUREG-0654/FEMA-REP-1, "Criteria for Preparation and Evaluation of Radiological Emergency Response Plans and Preparedness in Support of Nuclear Power Plants". A subsequent series of technical documents, "Methodology for Development of Emergency Action Levels" (NEI 99-01), gives a detailed description of developing EAL schemes for the Pressurized Water Reactor (PWR) and Boiling Water Reactor (BWR). The most recent outcome, NEI 07-01, focuses on advanced passive light water reactors. However, none of these documents focuses on the High Temperature Gas-cooled Reactor (HTGR). The HTGR has specific safety characteristics which differ from those of water-cooled reactors to some extent. Because of the inherent design features of the HTGR, a significant reduction is achieved in the potential for an offsite radiological release. The tri-structural isotropic (TRISO)-coated fuel is particularly critical to the prevention of radiological releases, besides the other fission-product barriers. Accident transients occur over hours and days, not seconds, and no fast-acting active safety systems are required to maintain the fuel within design limits. These characteristics are significant for emergency planning and will affect HTGR EAL development; it is not appropriate to simply apply water-cooled reactor EALs to the HTGR. This paper discusses the EAL differences between the HTGR and water-cooled reactors. Probabilistic risk assessment technology is suggested for developing appropriate HTGR EALs.

288

Building Competence for Safety Assessment of Nuclear Installations: Applying IAEA's Safety Guide for the Development of a Level 1 Probabilistic Safety Assessment for the TRIGA Research Reactor in Malaysia

F.C. Brayon (a), M. Mazleha, P. Prak Tom (b), A.H.S. Mohd Sarif (c), Z. Ramli (a), F. Zakaria (b), F. Mohamed (c), Abid Aslam (d), A. Lyubarskiy, I. Kuzmina, P. Hughes, A. Ulses (e)

a) Atomic Energy Licensing Board, Selangor, Malaysia, b) Malaysia Nuclear Agency, MOSTI, Selangor, Malaysia, c) Universiti Kebangsaan Malaysia, Selangor, Malaysia, d) Pakistan Nuclear Regulatory Authority, Pakistan, e) International Atomic Energy Agency, Vienna, Austria

In 2010, the International Atomic Energy Agency (IAEA) published its Safety Guide on Development and Application of Level 1 Probabilistic Safety Assessment for Nuclear Power Plants (SSG-3). Although it was aimed at covering state-of-the-art recommendations for the development of PSA for Nuclear Power Plants (NPPs), the guidance was deemed applicable to other nuclear installations as well. In order to gain insights into the applicability of SSG-3 to a research reactor, and at the same time to achieve the goal of building competence and capacity for PSA in Malaysia, the IAEA started an extrabudgetary project in December 2012 entitled "Applying PSA to Existing Facility to Develop Transferable Skills in the Use of PSA to Evaluate NPP Safety." This project has been funded by Norway. The facility selected for the PSA study was the TRIGA PUSPATI Research Reactor (1 MW), which has been in operation in Malaysia since 1984. All major PSA tasks have been performed in accordance with the recommendations provided in SSG-3 (e.g. initiating event analysis, systems analysis, component data, human reliability, etc.). The design specifics of the research reactor under consideration have been addressed in the PSA model (e.g. four end states have been defined, initiating events induced by human errors have been considered in detail, several operational states are covered, etc.). The paper provides an overview of the methodology applied and discusses specific features of the PSA tasks for the research reactor. Preliminary results and insights obtained are presented, and insights for guidance in developing a research reactor PSA are highlighted.

297

Development of State Categorization Model for Necessity of Feed and Bleed Operation and Application to OPR1000

Bo Gyung Kim (a), Ho Joon Yoon (b), Sang Ho Kim, and Hyun Gook Kang (a)

a) Department of Nuclear and Quantum Engineering, Korea Advanced Institute of Science and Technology, Daejeon, Republic of Korea, b) Department of Nuclear Engineering, Khalifa University of Science, Technology & Research, Abu Dhabi, UAE

Since a nuclear power plant (NPP) has many functions and systems, operating procedures are complicated and the chance of human error in operating the safety systems is quite high when an accident occurs. There are two approaches to cooling down the reactor coolant system (RCS) after an accident in an NPP: heat removal by the secondary side, and heat removal by a feed-and-bleed (F&B) operation. The F&B operation provides residual heat removal when the secondary system is not available. The decision to initiate F&B operation is difficult because radioactive coolant is released to the containment during the operation. A state categorization model was developed to qualitatively analyze the necessity and effect of F&B operation. Sequences of RCS conditions when heat removal by the secondary side fails are classified into two event types: non-LOCA and LOCA. The proposed model has five levels indicating the necessity and effect of F&B operation qualitatively, plus a component failure state indicating the unavailability of F&B operation. Thermal-hydraulic analysis was performed to ascertain the boundary of each level in OPR1000, and the boundary of successful F&B operation of OPR1000 was identified.
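
For illustration, such a threshold-driven categorization can be organized as a simple decision function. The sketch below is a minimal Python rendering of a five-level scheme; the parameter names and threshold values are hypothetical placeholders, not the OPR1000 boundaries derived in the paper.

    # Illustrative sketch of a five-level feed-and-bleed (F&B) state
    # categorization. Parameter names and thresholds are hypothetical
    # placeholders; the paper derives the actual OPR1000 level boundaries
    # from thermal-hydraulic analysis.

    def categorize_fb_state(rcs_pressure_mpa, core_exit_temp_c, sg_available):
        """Map a plant state to a qualitative F&B necessity level (1-5)."""
        if sg_available:
            return 1  # secondary-side heat removal available; F&B not needed
        if core_exit_temp_c < 350.0:
            return 2  # margin remains; F&B initiation can be delayed
        if rcs_pressure_mpa > 15.0:
            return 3  # high pressure; bleed path needed before injection
        if core_exit_temp_c < 650.0:
            return 4  # F&B needed promptly to prevent core uncovery
        return 5      # immediate F&B is the only remaining heat-removal path

    print(categorize_fb_state(16.2, 400.0, sg_available=False))  # -> 3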

163

Thermal-Hydraulic Analysis for Supporting PSA of SBLOCA in APR+

Sang Hee Kang, Ho Rim Moon, and Han Gon Kim

Korea Hydro & Nuclear Power Co., Ltd, Daejeon, Republic of Korea

The Advanced Power Reactor Plus (APR+), a Gen III+ reactor based on the APR1400, is being developed in Korea. To enhance its safety, a passive auxiliary feedwater system has been adopted for passive secondary cooling. A probabilistic safety assessment is performed to estimate the safety of the APR+ design. This paper discusses the success criteria verified and decided upon for a more realistic and accurate safety evaluation of the APR+. The analysis is performed with the best-estimate thermal-hydraulic code RELAP5/MOD3.3. A sensitivity analysis was performed to determine the interaction of break sizes, the number of high pressure safety injection pumps, the pilot operated safety and relief valves, and the timing of the operator's action for the feed and bleed procedure. This study shows that, during a small break loss of coolant accident without loss of offsite power, the plant can be cooled down without core damage if one of the high pressure safety injection pumps is available and one of the pilot operated safety and relief valves is opened within 80 minutes after the pilot operated safety and relief valve is first opened. For a small break loss of coolant accident with loss of offsite power, most of the given scenarios except the 1.97-inch break were found to need additional action to prevent core damage. The analysis results can contribute to a more realistic and accurate probabilistic safety assessment of the APR+.

W23 Risk and Hazard Analyses II

3:30 PM O'ahu

Chair: TBD

316

Towards the Development of the Observability-in-Depth Safety Principle for the Nuclear Industry

Francesca M. Favarò, and Joseph H. Saleh

Georgia Institute of Technology, Atlanta, GA, USA

Defense-in-depth is a fundamental safety principle for the design and operation of nuclear power plants in the United States. Despite its general appeal, some authors have identified a potential drawback of defense-in-depth: its potential for hazardous state concealment. To prevent this drawback from materializing, we propose in this work a novel safety strategy, "observability-in-depth". We characterize it as the set of provisions designed to enable real-time monitoring and identification of hazardous states and accident pathogens, and to support a dynamic defense-in-depth safety strategy in which defensive resources, safety barriers and others, are prioritized and allocated dynamically in response to emerging risks. To better illustrate the role of observability-in-depth in the nuclear industry, we examine the exemplar case study of the Three Mile Island accident and several "event reports" from the U.S. Nuclear Regulatory Commission (NRC) database. The selected cases clarify some of the benefits of observability-in-depth by contrasting outcomes in situations where this safety principle was violated with instances of proper implementation.

324

Research on Leakage and Fire Accidents of the Heating and Refrigerating Systems Charging with the Flammable Working Fluids

Zhao Yang, Xi Wu

School of Mechanical Engineering, Tianjin University, Tianjin, P.R. China

Working fluids are a vital factor in environmental protection and energy conservation for all heating and refrigerating systems. The hydrochlorofluorocarbon and hydrofluorocarbon working fluids mainly in use, as well as their substitutes in the transitional period, are not environmentally friendly, possessing ozone depletion potential and a large global warming potential. Mounting evidence indicates that most of the strong candidates for the next generation of working fluids have the disadvantage of flammability, so it is increasingly important to research the combustion and leakage characteristics of flammable working fluids. This paper analyzes their flammability properties and typical leakage and fire accidents involving heating and refrigerating systems worldwide. Technical causes are identified and available strategies proposed. The paper also examines the leakage characteristics of the equipment over its whole life cycle, from manufacture through transport, storage, use, and repair to scrapping, and discusses the possible ignition sources in each segment. In addition, prevention measures are suggested that consider both human and technological factors.

344

Radiotherapy Errors Analysis before Plan Delivery based on Probabilistic Safety Analysis Method

Wenyi Li, Xi Pei (1,3), Shanqi Chen, Jin Wang, Liqin Hu, Yican Wu (1,2,3), FDS Team

1) Institute of Nuclear Energy Safety Technology, Chinese Academy of Sciences, Hefei, Anhui, China, 2) University of Science and Technology of China, Hefei, Anhui, China, 3) Engineering Technology Research Center of Accurate Radiotherapy of Anhui Province, Hefei, Anhui, China

At present, Probabilistic Safety Assessment (PSA) is widely used in the safety analysis of complicated systems, and the PSA method has been applied to the safety analysis of some radiation apparatus, such as linear accelerators. In studying radiotherapy safety, we found that radiotherapy plan errors were the most important in the whole process. This study attempts an analysis of radiotherapy plan errors using the PSA method. Firstly, we analyzed the whole process of radiotherapy plan errors, listed the steps according to the logic of the radiotherapy planning system, and constructed a fault tree (FT) from top to bottom. Secondly, we obtained reliability data for the basic events from clinical experience at a local radiotherapy center, and quantified the radiotherapy planning system fault tree with RiskA, the integrated reliability and probabilistic safety assessment program developed by the FDS Team. In the case presented in the paper, the top event probability was 1.0×10⁻³, while a systematic review indicated a total radiotherapy error rate of about 1.5×10⁻³. The somewhat high value may be because the FT model needs improvement, or because of the staff's intense workload. The most important basic event was "wrong patient"; the second was "diagnosis errors". This means most errors are attributable to human mistakes or inattention. The study confirms that improvements in both technology and sense of responsibility are a common concern for any hazardous work. However, the basic events, which are merely a preliminary decomposition of the radiotherapy process, need to be subdivided in further analysis, since the data used were obtained from clinical experience in a single case.
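
At its simplest, the fault tree quantification described above reduces to combining basic-event probabilities through AND/OR gates. The sketch below shows this under an independence assumption; the event names and probabilities are invented for illustration, and the paper's actual model was quantified with the RiskA program.

    # Minimal fault-tree quantification sketch. Gate structure and
    # basic-event probabilities are illustrative only; the paper's model
    # uses clinical data and the RiskA program.

    def or_gate(*probs):
        """P(union) for independent events: 1 - prod(1 - p_i)."""
        q = 1.0
        for p in probs:
            q *= (1.0 - p)
        return 1.0 - q

    def and_gate(*probs):
        """P(intersection) for independent events: prod(p_i)."""
        q = 1.0
        for p in probs:
            q *= p
        return q

    wrong_patient   = 5e-4                    # hypothetical basic events
    diagnosis_error = 3e-4
    plan_transcribe = and_gate(2e-3, 1e-1)    # error made AND check missed

    top = or_gate(wrong_patient, diagnosis_error, plan_transcribe)
    print(f"Top event probability: {top:.2e}")  # ~1.0e-03 in this toy case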

442

Accident Analysis of a Transport System: The Case of the Bus Rapid Transit System in Mexico City

Jaime Santos-Reyes, Vladimir Avalos-Bravo, and Edith Rodriguez-Rojas

SARACS Research Group, SEPI-ESIME, IPN, Mexico City, Mexico

In recent years, Mexico City has been seriously affected by the lack of efficient transport services, owing to the city's excessive population growth; the lack of an efficient urban transport system has been evident. A 'Bus Rapid Transit' (BRT) system appeared as an alternative to the city's transport problem. However, the system is vulnerable to accidents; it is believed that since its implementation there have been a total of 415 related accidents. When accidents occur, particularly during rush hours, they usually bring chaos to the city. For example, a collision between a motor car and a BRT unit occurred on June 29, 2013, at the intersection of X and Y avenues. The accident left fifteen people injured and disrupted vehicular traffic, which took several hours to return to normal. The paper presents some preliminary results of the analysis of the accident, applying accident analysis techniques such as 'barrier analysis' and 'events & causal factors charting'. The paper gives an account of the ongoing research project.

40

Emergency Resource Allocation for Disaster Response: An Evolutionary Approach

Mohammed Muaafa, Ana Lisbeth Concha, and Jose Emmanuel Ramirez-Marquez

School of Systems and Enterprises, Stevens Institute of Technology, Hoboken, NJ, USA

The efficient response to a disaster plays an important role in decreasing its impact on affected victims. In some cases, the high volume of potential casualties as well as the urgency of a fast response increases the complexity of the disaster response mission. Such cases have created a need for developing an effective and efficient disaster response strategy. This paper focuses on developing a multi-objective optimization model and an evolutionary algorithm as a first step to generate optimal emergency medical response strategies characterized by the selection of: (1) locations of temporary emergency units, (2) dispatching strategies of emergency vehicles to evacuate injured victims to the temporary emergency units, and (3) the number of victims to evacuate to each unit. The objectives of the optimization model are to minimize the response time and the cost of the response strategy. The evolutionary algorithm is used to solve the model and find a set of Pareto optimal solutions, where each solution represents a different emergency medical response strategy. This approach can help decision-makers evaluate the trade-offs among different strategies. Three experiments are presented to illustrate the model.
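
At the core of such an evolutionary approach is non-dominated (Pareto) filtering over the two objectives. The sketch below shows that step on synthetic (response time, cost) pairs; it is a conceptual illustration of the dominance idea, not the authors' algorithm.

    import random

    # Pareto-front extraction over (response time, cost) pairs, the kind of
    # non-dominated filtering step used inside a multi-objective
    # evolutionary algorithm. The candidate strategies here are synthetic.

    def dominates(a, b):
        """a dominates b if it is no worse in both objectives and differs."""
        return a[0] <= b[0] and a[1] <= b[1] and a != b

    def pareto_front(candidates):
        return [c for c in candidates
                if not any(dominates(d, c) for d in candidates)]

    random.seed(1)
    strategies = [(random.uniform(10, 60),      # response time, minutes
                   random.uniform(1e5, 1e6))    # cost, dollars
                  for _ in range(50)]

    for time_min, cost in sorted(pareto_front(strategies)):
        print(f"time={time_min:5.1f} min  cost=${cost:,.0f}")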

W24 Digital I&C and Software Reliability III

3:30 PM Waialua

Chair: Pavol Hlavac, RELKO Ltd.

377

Optimal Selection of Diversity Types for Safety-Critical Computer Systems

Vyacheslav Kharchenko (a,b), Tetyana Nikitina (a), and Sergiy Vilkomir (c)

a) National Aerospace University named after N.E. Zhukovsky "KhAI", Kharkiv, Ukraine, b) Centre for Safety Infrastructure-Oriented Research and Analysis, Kharkiv, Ukraine, c) East Carolina University, Greenville, NC, USA

An important task in the development of safety-critical computer systems is achieving high levels of reliability and safety. To protect such systems from common-cause failures that can lead to potentially dangerous outcomes, special methods are applied, including multi-version technologies operating at different levels and with different volumes of diversity. In this article, we solve the problem of finding an optimal design decision at minimum cost with the required diversity level, or with maximum diversity at an assumed cost. The proposed multi-version model takes into consideration the dependencies among diversity types, diversity metrics, and costs, and presents a decision for each version of a two-version system. The model can be used to make optimal design decisions with various types of diversity during software-based multi-version system development.
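
The two dual formulations (minimum cost at a required diversity level; maximum diversity within a cost budget) can be illustrated with a toy exhaustive search. In the sketch below the diversity types, metric contributions, and costs are invented, and contributions are treated as additive for simplicity, whereas the paper's model accounts for dependencies among diversity types.

    from itertools import combinations

    # Toy exhaustive search over the two dual optimization problems.
    # Types, metric contributions, and costs are invented placeholders.

    diversity_types = {
        # name: (diversity metric contribution, cost in arbitrary units)
        "hardware":  (0.40, 120),
        "software":  (0.35, 80),
        "language":  (0.15, 30),
        "toolchain": (0.10, 25),
    }

    def evaluate(subset):
        metric = sum(diversity_types[t][0] for t in subset)
        cost = sum(diversity_types[t][1] for t in subset)
        return metric, cost

    def all_subsets():
        names = list(diversity_types)
        for r in range(1, len(names) + 1):
            yield from combinations(names, r)

    # Problem 1: cheapest selection reaching a required diversity level.
    required = 0.5
    feasible = [(evaluate(s)[1], s) for s in all_subsets()
                if evaluate(s)[0] >= required]
    print("min-cost:", min(feasible))

    # Problem 2: most diverse selection within a cost budget.
    budget = 150
    affordable = [(evaluate(s)[0], s) for s in all_subsets()
                  if evaluate(s)[1] <= budget]
    print("max-diversity:", max(affordable))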

390

Development of Post-Accident Monitoring System for Severe Accidents

Chang-Hwoi Kim, Sup Hur, Kwang-Sub Son, and Tong-Il Jang

Korea Atomic Energy Research Institute, Daejeon, South Korea

To cope with a severe accident such as the one at the Fukushima nuclear power plants, a fully independent monitoring and control system, separated (isolated) from the conventional instrumentation and control system, is needed. Also, a remote control room that is movable and usable at a distant location is needed for safe plant control and monitoring in an emergency. In this paper, we suggest a new concept for a remote mobile control room and hardened I&C equipment to cope with a severe accident in nuclear power plants.

405

RELKO Experience with Reliability Analyses of Safety Digital I&C

Jana Macsadiova, Vladimir Sopira, Pavol Hlavac

RELKO Ltd., Bratislava, Slovak Republic

The use of digital technologies is increasing in the operation of NPPs. Not only new plants but also the current generation of plants use them, through upgrades of existing analog systems. It is a regulatory requirement that their reliability be justified on an objective basis. The objective of digital system risk research is to identify and develop methods, tools and guidance for modeling the reliability of digital systems. The scope of this research is focused mainly on HW failures, with limited reviews of SW failures and SW reliability methods. Due to the many unique attributes of digital systems, a number of modeling and data collection challenges exist, and there is no consensus on how the reliability models should be developed. The paper presents the methodology and an overview of reactor protection system reliability analysis for safety digital instrumentation and control (I&C) in NPPs. The digital I&C systems (RPS and ESFAS) play an essential role in the safe operation of nuclear power plants. The RPS forms part of the safety system that, for the purpose of ensuring reactor safety, monitors and processes the important process variables in order to prevent unacceptable conditions and maintain the reactor within safe limits. The RPS is usually divided into two parts: the RTS (Reactor Trip System) and the ESFAS (Engineered Safety Features Actuation System). The RTS is designed to cause automatic interruption or slow-down of the fission reaction on detection of a number of accident situations; this is achieved by switching off the power supplies of the control rod drive mechanisms. The ESFAS is designed to cause automatic activation of different safety systems, e.g., to shut down the reactor, to inject water into the primary and/or secondary circuit in emergency conditions, and to prevent radioactive release outside the confinement during a LOCA or transient. The paper also presents a detailed description of the analyzed I&C system and its software, TELEPERM XS, which provides the digitalization, processing and evaluation. RELKO has been working in this field since 1996 and has been involved in reliability analyses of safety digital I&C systems for NPPs in Slovakia, Hungary, Sweden, Germany and Finland. The results of these reliability analyses have shown that properly designed safety I&C systems can be very reliable from the dangerous failure (failure on demand) point of view, and the frequency of spurious actuation is also very low. The main concern of this paper is to present the methodology of digital reactor protection system reliability analysis used at RELKO Ltd.
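
The failure-on-demand perspective mentioned above can be illustrated with a standard m-out-of-n voting calculation extended by a simple beta-factor common-cause term. All parameter values in the sketch below are invented examples, not RELKO results.

    from math import comb

    # Illustrative probability-of-failure-on-demand (PFD) calculation for
    # an m-out-of-n voted protection channel group, with a beta-factor
    # common-cause contribution. All parameter values are invented.

    def pfd_moon(n, m, pfd_channel, beta=0.0):
        """System fails on demand if fewer than m of n channels work,
        i.e. at least n - m + 1 channels fail independently, plus a
        common-cause term that fails all channels at once."""
        p_ind = (1.0 - beta) * pfd_channel
        independent = sum(comb(n, k) * p_ind**k * (1.0 - p_ind)**(n - k)
                          for k in range(n - m + 1, n + 1))
        return independent + beta * pfd_channel

    # Example: 2-out-of-4 voting, channel PFD 1e-3, 5 % common-cause share
    print(f"{pfd_moon(4, 2, 1e-3, beta=0.05):.2e}")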

455

Markov’s Model and Tool-Based Assessment of Safety-Critical I&C Systems: Gaps of the IEC 61508

Valentina Butenko (a), Vyacheslav Kharchenko (a,b), Oleg Odarushchenko (b), Peter Popov (c), Vladimir Sklyar (b) and Elena Odarushchenko (d)

a) National Aerospace University “KhAI”, Kharkiv, Ukraine, b) Research and Production Company “Radiy”, Kirovograd, Ukraine, c) Centre of Software Reliability, City University London, London, United Kingdom, d) Poltava National Technical University, Poltava, Ukraine

The accurate dependability and safety assessment of systems for critical applications is an important task in development and certification processes. It can be conducted through probabilistic model-based evaluation using a variety of tools and techniques (T&T). As each T&T is bounded by its application area, careful selection of the appropriate one is highly important. In this paper, we present a gap analysis of a well-known modeling approach, Markov modeling (MM), focusing on T&T selection and application procedures, and examine how one of the leading safety standards, IEC 61508, tracks those gaps. We discuss how the main assessment risks can be eliminated or minimized using a metric-based approach, and present the safety assessment of a typical NPP I&C system, the Reactor Trip System. The analysis of the results determines the feasibility of introducing new regulatory requirements for the selection and application of T&T used for MM-based safety assessment.
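
As a minimal example of the Markov modeling the gap analysis concerns, the sketch below solves a one-component repairable availability model for its steady state. The failure and repair rates are illustrative and are not taken from the paper's Reactor Trip System case study.

    import numpy as np

    # Minimal continuous-time Markov availability model: one repairable
    # component with failure rate lam and repair rate mu.
    # States: 0 = up, 1 = down. Rates are illustrative only.

    lam, mu = 1e-4, 1e-1   # failures/hour, repairs/hour

    # Generator matrix Q; each row sums to zero.
    Q = np.array([[-lam,  lam],
                  [  mu,  -mu]])

    # Steady state pi satisfies pi @ Q = 0 with sum(pi) = 1. Stack the
    # (rank-deficient) balance equations with the normalization row and
    # solve the consistent least-squares system.
    A = np.vstack([Q.T, np.ones(2)])
    b = np.array([0.0, 0.0, 1.0])
    pi, *_ = np.linalg.lstsq(A, b, rcond=None)

    print(f"steady-state availability: {pi[0]:.6f}")  # mu / (lam + mu)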

458

Quantification of Reactor Protection System Software Reliability Based on Indirect and Direct Evidence

Ola Bäckström (b), Jan-Erik Holmberg (c), Mariana Jockenhoevel-Barttfeld (d), Markus Porthin (a), Andre Taurines (d)

a) VTT Technical Research Centre of Finland, Espoo, Finland, b) Lloyd's Register Consulting, Stockholm, Sweden, c) Risk Pilot, Espoo, Finland, d) AREVA GmbH, Erlangen, Germany

This paper presents a method for the quantification of software failures of a reactor protection system in the context of probabilistic safety assessment (PSA) for a nuclear power plant. The emphasis of the method is on quantifying the failure probability of an application software module, which can lead to two functional failure modes: failure to actuate a specific instrumentation and control (I&C) function on demand, or spurious actuation of a specific I&C function. The quantification is based on two main metrics: the complexity of the application software and the degree of verification and validation of the software. The relevance of common cause failures and an analysis of the impact of fatal and non-fatal failures on the system are covered in the discussion, as are the collection of operational data and the challenges of using it for software reliability quantification. The outlined quantification method offers a practical and justifiable approach to accounting for software failures that are usually ignored in current PSAs.
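
Purely for illustration, a two-metric quantification of this general shape could be structured as a base probability by complexity class, scaled by a verification-and-validation factor, as sketched below. The classes and values are invented and this is not the quantification method proposed in the paper.

    # Purely illustrative two-metric lookup for an application-software-
    # module failure probability, driven by a complexity class and a
    # verification & validation (V&V) class. Table values are invented
    # and are NOT the quantification proposed in the paper.

    BASE_PFD = {"low": 1e-5, "medium": 1e-4, "high": 1e-3}   # by complexity
    VV_FACTOR = {"extensive": 0.1, "standard": 1.0, "minimal": 10.0}

    def software_pfd(complexity, vv_degree):
        return BASE_PFD[complexity] * VV_FACTOR[vv_degree]

    print(software_pfd("medium", "extensive"))  # 1e-05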

W25 Risk Informed Applications III

3:30 PM Wai'anae

Chair: Justin Taylor Pence, University of Illinois

569

A Methodology for Ranking of Diverse Nuclear Facilities As a Tool to Improve Nuclear Safety Supervision

Alexander Khamaza, Mikhail Lankin

Scientific and Engineering Centre for Nuclear and Radiation Safety, Moscow, Russia

A nuclear regulatory body may supervise a considerable number of different nuclear facilities: NPPs of different capacities, generations and types, research reactors, various fuel cycle facilities, radioactive sources, etc. This article presents a methodology for ranking different nuclear facilities according to the potential hazard level they represent, with the aim of optimizing the regulatory body's control and supervision practices. Four ranking criteria are proposed. The first criterion is the scale of a hypothetical accident in a situation of total inefficiency of safety barriers: depending on whether the maximum hypothetical accident leads to off-site consequences and what these consequences are, as well as on the A/D ratio, four categories have been identified, the third of which is further divided into four subcategories. The second and third criteria are the estimated probabilities that operational occurrences take place at the facility and the corresponding conditional probabilities that these occurrences do not develop into an accident of a specific level of severity. Finally, the fourth criterion is the efficiency of the facility's defence in depth. The article sets forth an algorithm for assessing this efficiency; it also contains a nomenclature of threats to defence in depth and an algorithm for evaluating the facility's defence-in-depth vulnerability with respect to each threat and its mechanisms of implementation. Based on the assessment results for defence-in-depth efficiency, the facility under consideration is ranked as belonging to one of four categories. According to the rules described in the article, after the facility has been rated against all four criteria (or only some of them, if evaluation against the others is not possible), it is assigned a final resulting rating of potential hazard.
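
Schematically, the final rating step could look like the sketch below, where a facility's ratings on the four criteria (some possibly missing) are combined conservatively. The rating scales and the aggregation rule are placeholders, not the article's actual rules.

    # Schematic aggregation of the four ranking criteria into a final
    # hazard category. Scales and the max-severity aggregation rule are
    # invented placeholders, not the article's actual rules.

    def final_rating(accident_scale, occurrence_rating,
                     escalation_rating, did_rating):
        """Each input is a category 1 (highest hazard) .. 4 (lowest);
        criteria that could not be evaluated may be passed as None."""
        ratings = [r for r in (accident_scale, occurrence_rating,
                               escalation_rating, did_rating)
                   if r is not None]
        # Conservative choice: the worst (numerically lowest) category wins.
        return min(ratings)

    print(final_rating(2, 3, None, 4))  # -> 2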

319

Application of Design Review to Probabilistic Risk Assessment in a Large Investment Project

Seppo Virtanen (a), Jussi-Pekka Penttinen (b), Mikko Kiiski, and Juuso Jokinen (c)

a) Tampere University of Technology, Finland, b) Ramentor Oy, Tampere, Finland, c) Pöyry Finland Oy, Vantaa, Finland

In this paper, we present a systematic and comprehensive Design Review (DR) process that is integrated into the design and engineering stages of final disposal facilities for spent nuclear fuel. The review process consists of seventeen interconnected phases, and the methodology of certain phases is described in more detail in the paper. The main tools in the design review process are probabilistic modeling, stochastic simulation schemes, and large-scale computer-aided calculation. Experience with applying the design review process at an early stage of the project design and development phase shows that it becomes possible to identify the problem areas that may reduce system availability and safety, increase system life-cycle costs, and delay the design or operational start-up.

184

Risk-Informed Nuclear Safety Management Program Development in CGNPC

Zhong Shan

Suzhou Nuclear Power Research Institute

This paper presents the safety management system designed for CGNPC. It was developed based on ideas adopted from the NRC's Reactor Oversight Process (ROP). A system independent of the classic one is planned, providing risk management at three levels (unit, site, and multi-site) in a risk-informed manner, using performance indicators and risk significances evaluated from internal and licensee events. Reports are provided both monthly and quarterly to the risk management committee to support decision-making.

W26 Risk Informed Licensing and Regulation II

3:30 PM Ewa

Chair: Marie Pohida, United States Nuclear Regulatory Commission

223

Technical Challenges Associated with Shutdown Risk when Licensing Advanced Light Water Reactors

Marie Pohida, Jeffrey Mitman

United States Nuclear Regulatory Commission, Washington, DC, USA

The United States (U.S.) Title 10 Code of Federal Regulations (CFR), 10 CFR 52.47(a)(27), requires applicants seeking a design certification (DC) to submit a description of the design-specific probabilistic risk assessment (PRA) and its results. A DC applicant's final safety analysis report (FSAR) is expected to contain a qualitative description of PRA insights and uses, as well as some quantitative PRA results, such that the U.S. Nuclear Regulatory Commission (NRC) staff can perform the review and ensure risk insights were appropriately factored into the design. As referenced in the NRC Standard Review Plan (SRP) (NUREG-0800) Chapter 19 [1], the staff ensures the risk associated with the design compares favorably against the Commission's goals [2] of less than 1×10⁻⁴ per year (/yr) for core damage frequency (CDF) and less than 1×10⁻⁶/yr for large release frequency (LRF). The staff expects this PRA to cover all modes of operation, including shutdown modes. The NRC has reviewed or is in the process of reviewing shutdown risk for evolutionary reactors and advanced passive reactors, and is preparing to review shutdown risk for small modular reactors (i.e., as part of pre-application activities). At the time the PRA information is submitted to the NRC, detailed shutdown procedures and outage plans have not been developed. Additionally, a low power and shutdown (LPSD) PRA standard has not been formally issued for general use. Therefore, reviews of plant configurations during shutdown, combined with the impact of temporary equipment, penetrations, and a potentially open containment; evaluations of new design features; and LPSD PRA scope issues have presented several challenges that are discussed in this paper.

284

Experiences Gained from a Living PSA Workshop Held at the PSA Castle Meeting in April 2013 in Stockholm

Ralph Nyman, Per Hellström, Frida Olofsson

Swedish Radiation Safety Authority, Stockholm, Sweden

The PSA Castle Meeting in April 2013 was organized by the Swedish Radiation Safety Authority (SSM) together with the Nordic PSA Group (NPSAG). Participants in the meeting represented licensees, regulators, consultants, research organizations, international organizations and universities. A workshop on the subject of "Living PSA (LPSA)" was held on the last meeting day. The workshop results show very clearly that interpretations of the "LPSA concept", and the daily practice of working with PSAs, differ among nuclear power plants in the Nordic countries, in Europe and in the USA. Because different strategies are practiced in these countries, stakeholders have varying aims and goals in maintaining and ensuring the quality of the PSAs and their applications. Established LPSA processes and instructions for maintaining the LPSA concept vary among stakeholders, and the pros and cons of each are unknown. The aim of this paper is to open a wider international discussion about the interpretation and understanding of the LPSA concept, in order to achieve a harmonized view on methods for ensuring the quality of base PSAs, of mandatory PSA updates and intermediate updates, and of mandatory and voluntary applications, but especially in the area of resource-efficient reviewing methods and quality assurance (QA) methods for PSAs and their applications.

299

An Initiative towards Risk-Informing Nuclear Safety Regulation in Hungary

Attila Bareith (a) and Geza Macsuga (b)

a) NUBIKI Nuclear Safety Research Institute Ltd., Budapest, Hungary, b) Hungarian Atomic Energy Authority, Budapest, Hungary

In response to a request by the Hungarian Atomic Energy Authority (HAEA), PSA analysts of the NUBIKI Nuclear Safety Research Institute developed a proposal for advancing the use of PSA information within a risk-informed regulatory decision-making framework and outlined a work plan to perform the tasks envisioned in the proposal. Key PSA application areas were identified, with an overview of the associated analysis methods; improvement was proposed in thirteen PSA application areas in total. Risk-informed safety management and risk-informed regulation were included in the proposal as an overall framework for all the other applications. It was suggested that the HAEA ensure the implementation in nuclear safety regulation of all the PSA applications characterized in the study between 2013 and 2020. Further, it was found necessary to investigate in detail and evaluate what modifications to safety regulation would be needed to underpin risk-informed safety management and risk-informed regulation. PSA applications were prioritized in support of scheduling the developmental tasks, and the role of risk-informed decision-making in the different life cycle stages of a nuclear power plant was characterized. Finally, it was proposed to make some distinction between PSA applications to operating and to newly built nuclear power plants.

387

Mapping the Risks of Swedish NPPs to Facilitate a Risk-Informed Regulation

Frida Olofsson, Ralph Nyman, Per Hellström

Swedish Radiation Safety Authority, Sweden

The Swedish Radiation Safety Authority (SSM) has initiated work to map the risks of the Swedish NPPs based on PSA results. The risk map will characterize and describe the most dominant risks of each plant. The main objective of this work is to use the risk map as the basis for a risk-informed regulatory approach. Being risk-informed means that a graded approach can be applied in which issues associated with higher risks are prioritized over issues with lower risk. This graded approach will be applied, for example, to incoming applications for plant changes and to reported incidents, to decide which issues should be prioritized for further evaluation and/or review. The work on the risk map started late in 2013, and the first revision will be finalized during the summer of 2014. The risk map will be a living document, serving as a tool for integrated risk-informed decision-making at SSM. The work on the risk map has presented challenges along the way, many of them related to differences in how the PSAs for different NPPs are performed and how their results are presented. A research need has been identified: to develop guidance for harmonized presentation of PSA results in order to resolve some of the identified issues.

575

Proposed Initiative to Improve Nuclear Safety and Regulatory Efficiency

Antonios M. Zoulis, Fernando Ferrante

US Nuclear Regulatory Commission, Washington, DC, USA

In early 2013, the Commissioners at the US Nuclear Regulatory Commission (NRC) issued requirements for the NRC staff to pursue an exploratory effort on an initiative to enhance safety by applying probabilistic risk assessment to determine the risk significance of current and emerging reactor issues in an integrated manner and on a plant-specific basis. Recognizing that each operating nuclear power plant has unique contributors to risk, a licensee who performs such an assessment could use the insights gained to propose to the NRC a risk-prioritized plant modification schedule with respect to regulatory actions. Such prioritization, if approved, should both focus the licensee on completing the most important new safety measures first and address the challenges of dealing with various concurrent new and existing regulatory positions, programs, and requirements. This paper discusses the initiative and explores how addressing plant-specific risk insights can reduce the overall plant-specific risk as well as the overall average risk.

W27 Benchmark Problem #1 - A Space Propulsion System

3:30 PM Kona

Chair: Curtis L. Smith, Idaho National Laboratory

200

Engineering Risk Assessment of Space Thruster Challenge Problem

Donovan L. Mathias (a), Christopher J. Mattenberger (b), and Susie Go (a)

a) NASA Ames Research Center, Moffett Field, CA, USA, b) Science and Technology Corp., Moffett Field, CA, USA

The Engineering Risk Assessment (ERA) team at NASA Ames Research Center utilizes dynamic models with linked physics-of-failure analyses to produce quantitative risk assessments of space exploration missions. This paper applies the ERA approach to the baseline and extended versions of the PSAM Space Thruster Challenge Problem, which investigates mission risk for a deep space ion propulsion system with time-varying thruster requirements and operations schedules. The dynamic mission is modeled using a combination of discrete and continuous-time reliability elements within the commercially available GoldSim software. Loss-of-mission (LOM) probability results are generated via Monte Carlo sampling performed by the integrated model. Model convergence studies are presented to illustrate the sensitivity of integrated LOM results to the number of Monte Carlo trials. A deterministic risk model was also built for the three baseline and extended missions using the Ames Reliability Tool (ART), and results are compared to the simulation results to evaluate the relative importance of mission dynamics. The ART model did a reasonable job of matching the simulation models for the baseline case, while a hybrid approach using offline dynamic models was required for the extended missions. This study highlighted that state-of-the-art techniques can adequately adapt to a range of dynamic problems.
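
The convergence behavior referred to above can be reproduced on a toy model: estimate the LOM probability by Monte Carlo and watch the estimate settle as trials accumulate. The mission model below (two redundant thrusters with exponential lifetimes over a fixed burn time) is a stand-in, not the challenge problem or the GoldSim/ART models.

    import random

    # Toy Monte Carlo loss-of-mission (LOM) estimator illustrating how the
    # estimate converges with trial count. The mission model is a stand-in.

    def mission_fails(rng, burn_hours=8000.0, mttf_hours=20000.0):
        # LOM occurs if both thrusters fail before the required burn time.
        t1 = rng.expovariate(1.0 / mttf_hours)
        t2 = rng.expovariate(1.0 / mttf_hours)
        return max(t1, t2) < burn_hours

    rng = random.Random(42)
    failures, trials = 0, 0
    for target in (1_000, 10_000, 100_000, 1_000_000):
        while trials < target:
            failures += mission_fails(rng)
            trials += 1
        print(f"N={trials:>9,}  P(LOM) ~ {failures / trials:.4f}")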

376

Application of the Dynamic Flowgraph Methodology to the Space Propulsion System Benchmark Problem

Michael Yau, Scott Dixon, and Sergio Guarro

ASCA, Inc., Redondo Beach, CA, USA

This paper discusses ASCA’s experience in applying the Dynamic Flowgraph Methodology (DFM) to a space propulsion system problem specified by the Idaho National Laboratory (INL). This problem serves as a benchmark for comparing and evaluating the capabilities of advanced Probabilistic Risk Assessment (PRA) tools that are suitable for the risk analysis of future space systems. Future space systems will likely be highly automated, with self-diagnosis and recovery capability, and will also be likely to have multiple configurations to respond to mission events and contingencies. As a result of these complex features, traditional integrated event/fault tree analysis may not be best suited for accurately performing PRA for future space missions. DFM is a general-purpose dynamic Multi-Valued Logic (MVL) modeling and analytical methodology supported by the Dymonda software tool, which can represent complex, time-dependent systems and processes, with inductive and deductive analysis capabilities that permit the systematic identification and quantification of success and failure events of interest. This benchmark study expands on the experience of applying DFM in past projects to include modeling and analysis of the system’s demand/time-based characteristics and redundancies, as well as the phased-mission features of the benchmark problem.

511

Analysis of the Space Propulsion System Problem Using RAVEN

Diego Mandelli, C. Smith, A. Alfonsi, C. Rabiti

Idaho National Laboratory, Idaho Falls (ID), USA

This paper presents a solution of the space propulsion problem using a PRA code currently under development at Idaho National Laboratory (INL). RAVEN (Risk Analysis and Virtual control ENvironment) is a multi-purpose Probabilistic Risk Assessment (PRA) software framework that allows dispatching different functionalities. It is designed to derive and actuate the control logic required to simulate the plant control system and operator actions (guided procedures) and to perform both Monte Carlo sampling of randomly distributed events and event-tree-based analysis. To facilitate input/output handling, a graphical user interface and a post-processing data-mining module are available. RAVEN can also interface with several numerical codes such as RELAP-7, RELAP5-3D and ad-hoc system simulators. For the space propulsion system problem, an ad-hoc simulator was developed in Python and then interfaced to RAVEN. The simulator fully models both the deterministic behavior (e.g., system dynamics and interactions between system components) and the stochastic behavior (e.g., failures of components/systems such as distribution lines and thrusters). Stochastic analysis is performed using random-sampling-based methodologies (i.e., Monte Carlo), both to determine the reliability of the space propulsion system and to propagate the uncertainties associated with a specific set of parameters. As indicated in the scope of the benchmark problem, the results generated by the stochastic analysis are used to derive risk-informed insights, such as the conditions under which different strategies can be followed.
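
The modeling pattern described, a deterministic simulator with stochastic failure injection sampled by a Monte Carlo driver, can be sketched compactly as below. The component names, rates, and the driver loop are illustrative placeholders and do not represent the actual RAVEN interface or the paper's simulator.

    import random

    # Stripped-down version of the pattern described in the abstract: a
    # Python simulator combining deterministic propellant dynamics with
    # stochastic thruster failures, wrapped by a Monte Carlo driver. All
    # names, rates, and the driver itself are illustrative placeholders.

    def simulate(rng, mission_days=900, thrusters=3, need_thrusters=2,
                 fail_per_day=2e-4, burn_rate=1.0, propellant=1000.0):
        """Return True if the mission completes (enough thrusters and fuel)."""
        alive = thrusters
        for _ in range(mission_days):
            # Stochastic layer: each operating thruster may fail today.
            alive -= sum(rng.random() < fail_per_day for _ in range(alive))
            if alive < need_thrusters:
                return False
            # Deterministic layer: propellant consumption.
            propellant -= burn_rate
            if propellant <= 0.0:
                return False
        return True

    rng = random.Random(7)
    n_trials = 20_000
    successes = sum(simulate(rng) for _ in range(n_trials))
    print(f"estimated mission reliability ~ {successes / n_trials:.3f}")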