
PSAM 16 Conference Session Th21 Overview

Session Chair: Marcio Chagas Moura (marcio.cmoura@ufpe.br)

Paper 1 PH20
Lead Author: Philipp Mell     Co-author(s): Marco Arndt (marco.arndt@ima.uni-stuttgart.de); Martin Dazer (martin.dazer@ima.uni-stuttgart.de)
Non-orthogonality in DoE: practical relevance of the theoretical concept in terms of regression quality and test plan efficiency
Whenever scientific problems are not well understood in their physical properties or cannot be solved analytically, statistical design of experiments (DoE) is often the only alternative. Yet many DoE approaches are mathematically derived and rest on assumptions and restrictions that can be hard or even impossible to meet in practice. Numerous research gaps regarding the practical implementation of DoE test plans therefore remain. One typical requirement is that the experimental design be orthogonal, which demands that the investigated factors be set exactly to the given factor levels. This is usually not possible, inevitably leading to deviations from the ideal condition. The literature therefore suggests a number of different metrics for measuring the non-orthogonality of a test plan, which are presented and compared in this paper. The question arises how severe the impact of a particular deviation from the ideal orthogonal design is. This can be assessed by studying two central quantities of a DoE test plan: first, the power, which shows how likely an existing effect is to be identified; second, the accuracy of the estimated model parameters resulting from the regression model developed from the test results. The scope of this paper is the assessment of these two quantities for typical deviations from a perfectly orthogonal full factorial test plan, allowing a transfer between theoretical requirements and practical use. In the long term, the results can be used to make test plans more efficient by suggesting which cost-reducing types of non-orthogonality still produce acceptable results. To this end, a parameter study for several exemplary systems is performed in a Monte Carlo simulation. For both the orthogonal and non-orthogonal test plans, linear regression and significance analysis are performed in each iteration.
After calculating the changes in test power and regression accuracy, it is assessed how critical the different types of non-orthogonality are. The results are also compared across the different non-orthogonality measures to see which of them best predicts the practical value of a DoE test plan.
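The Monte Carlo procedure the abstract describes (perturb an orthogonal full factorial plan, fit a linear regression, run a significance test, count detections) can be sketched as follows. This is an illustrative reconstruction, not the authors' code: the 2^2 design, the Gaussian model for the factor-level deviations, and the fixed critical t-value are all assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def mc_power(level_noise=0.0, effect=1.0, sigma=1.0, n_iter=2000):
    """Monte Carlo estimate of test power for a replicated 2^2 full factorial.

    level_noise perturbs the coded factor levels (+/-1) to mimic a
    non-orthogonal realization; level_noise=0.0 is the ideal orthogonal plan.
    """
    # nominal coded design: 2^2 full factorial, two replicates (8 runs)
    base = np.array([[-1, -1], [-1, 1], [1, -1], [1, 1]], dtype=float)
    X0 = np.tile(base, (2, 1))
    hits = 0
    for _ in range(n_iter):
        # realized factor levels deviate from the nominal ones
        X = X0 + rng.normal(0.0, level_noise, X0.shape)
        # only factor 1 carries a true effect
        y = effect * X[:, 0] + rng.normal(0.0, sigma, len(X))
        A = np.column_stack([np.ones(len(X)), X])  # intercept + 2 factors
        beta, res, *_ = np.linalg.lstsq(A, y, rcond=None)
        s2 = res[0] / (len(X) - A.shape[1])        # residual variance, dof = 5
        var_b1 = s2 * np.linalg.inv(A.T @ A)[1, 1]
        # two-sided t-test on factor 1; 2.571 is the 5% critical value, dof = 5
        if abs(beta[1]) / np.sqrt(var_b1) > 2.571:
            hits += 1
    return hits / n_iter
```

Comparing, e.g., `mc_power(level_noise=0.3)` against `mc_power()` then quantifies the power loss caused by that particular type of non-orthogonality.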
Name: Philipp Mell (philipp.mell@ima.uni-stuttgart.de)

Bio: Philipp Mell studied Automotive Engineering at the University of Stuttgart, Germany and received his academic degree Master of Science in 2020. He is working as a research assistant in the field of reliability engineering at the Institute of Machine Components. He is pursuing his PhD studies with a focus on life testing and statistical test planning. His further interests lie in the application of ML methods in reliability engineering.

Country: DEU
Company: Institute of Machine Components, University of Stuttgart
Job Title: Research assistant


Paper 2 LE81
Lead Author: Eugene Levner     Co-author(s): Boris Kriheli (borisk@hit.ac.il)
Achieving reliable low-cost detection of faulty parts in cyber-physical systems using unreliable detection sensors
Consider the problem of efficient and reliable detection of faulty parts in a large-scale educational cyber-physical system (ECPS). The ECPS is a network of several hundred computers, audio and video devices, and sensor/control devices that are located in homes and university classrooms, interact with each other via the Internet, and serve on-line or hybrid education. To locate the faulty parts, the ECPS uses a set of unreliable sensors that can test the system components one after another. For any possibly failed component, the following data are collected and used: (a) the cost and time of checking the component with a sensor; (b) the initial probability of component failure; (c) the probabilities of false negative and false positive (“false-alarm”) test results; and (d) the required safety level p0, defined as the probability of correctly detecting a faulty part; this parameter is set in advance by the decision maker and far exceeds the known reliability values of the individual sensors. To achieve the required level of safety, we develop a new method that checks each ECPS component several times in succession. Using the law of total probability and a Bayesian approach, we build a mathematical model for finding the minimum number of retests required for each component. We then develop a fast test scheduling algorithm, investigate its complexity, and conduct computational experiments to detect failures in a real educational CPS. Finally, we compare the proposed method with several known failure-detection methods and obtain encouraging practical results.
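The core retest idea (repeat an unreliable test until the Bayesian posterior probability of a fault reaches the safety level p0) can be sketched as follows. This is a minimal illustration of the total-probability/Bayes step, not the paper's scheduling algorithm, and the parameter names are assumptions.

```python
def min_retests(q, fn, fp, p0, k_max=100):
    """Smallest number of consecutive 'faulty' readings needed so that the
    posterior probability of an actual fault reaches the safety level p0.

    q  : prior probability that the component is faulty
    fn : sensor false-negative rate, P(test reads OK | faulty)
    fp : sensor false-positive rate, P(test reads faulty | OK)
    """
    for k in range(1, k_max + 1):
        # Bayes / total probability after k independent 'faulty' readings
        like_faulty = q * (1.0 - fn) ** k
        like_ok = (1.0 - q) * fp ** k
        posterior = like_faulty / (like_faulty + like_ok)
        if posterior >= p0:
            return k
    return None  # p0 not reachable within k_max retests
```

For example, with a 10% prior failure probability, a 5% false-negative rate, and a 10% false-positive rate, reaching p0 = 0.999 takes five consecutive positive readings, while p0 = 0.9 takes only two.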
Name: Eugene Levner (levner@hit.ac.il)

Bio: Eugene (Evgeni) Levner is professor emeritus at the Holon Institute of Technology, Holon, Israel. He was awarded the title Professor of Operations Research by Tel-Aviv University (1995) and Professor of Computer Science by the Holon Institute of Technology (2002). His research interests are in the design of exact, approximate, and fuzzy algorithms in Artificial Intelligence, Robotics, and Digital Medicine. He is the author of more than 120 books and articles, the organiser of numerous conferences, and a member of the editorial boards of seven influential scientific journals (IEEE Transactions on Industrial Informatics, Algorithms, and others).

Country: ISR
Company: Holon Institute of Technology
Job Title: Professor Emeritus


Paper 3 YI337
Lead Author: Askin Guler     Co-author(s): Andrew Worrall (worralla@ornl.gov); George Flanagan (flanagangf@ornl.gov)
Development and Capabilities of the Molten Salt Reactors Reliability Database
The Molten Salt Reactors Reliability Database (MOSARD), developed at Oak Ridge National Laboratory (ORNL), aims to help molten salt reactor (MSR) system designers, researchers, and regulators evaluate system reliability from knowledge of component reliability. A collaboration between ORNL, the Electric Power Research Institute (EPRI), and Vanderbilt University (VU) aims to establish the structure and contents of a component reliability database for MSRs. VU is tasked with collecting Molten-Salt Reactor Experiment (MSRE) component reliability data with the help of ORNL, and EPRI is providing insight to the project in its role as a postulated end user of the database. MOSARD is a timely input for licensing efforts of advanced MSR designs, including the risk-informed, performance-based regulations endorsed by the Nuclear Regulatory Commission. MSRE valve reliability data collected from semi-annual reports and operational summaries have been used as an initial data set to create the MOSARD structure and to test its capabilities. The MSRE data build understanding of the causes of failures that led to premature replacement of control and check valves of the off-gas, cover gas, and component cooling systems, and provide retrospective but unique lessons learned from the operational experience. The reliability parameters estimated by MOSARD are (1) the probability of failure on demand, (2) the failure rate during operation (used to calculate the failure-to-run probability), and (3) time trends in the reliability parameters. The MOSARD architecture comprises three tiers: from bottom to top, the database, the web services, and the user interface. All access to the data will be through the web services, with no direct database connections allowed. Both JSON-RPC (JavaScript Object Notation Remote Procedure Call) and REST (Representational State Transfer) service interfaces will be supported, and requests must satisfy authentication and access controls.
The top layer is the web application providing a user interface for querying data and requesting interactive plots. This paper summarizes the demonstration results for a limited MSRE data set and presents the capabilities for an expanded data set that would be provided by concentrated salt loop or MSR designers' test loop data.
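As an illustration of the first two reliability parameters listed above, common Bayesian point estimates for sparse component data can be written as below. The abstract does not state which estimators MOSARD uses, so the Jeffreys priors here are an assumption, not the database's actual method.

```python
def failure_on_demand(failures, demands):
    """Posterior-mean probability of failure on demand, Jeffreys
    Beta(0.5, 0.5) prior (a common choice for sparse demand data)."""
    return (failures + 0.5) / (demands + 1.0)

def operating_failure_rate(failures, hours):
    """Posterior-mean failure rate per operating hour, Jeffreys
    Gamma(0.5, ~0) prior on the Poisson event rate."""
    return (failures + 0.5) / hours
```

For instance, one valve failure in 199 demands gives an estimate of 1.5/200 = 0.0075 per demand, and two failures in 10,000 operating hours give 2.5e-4 per hour.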
Name: Askin Guler (yigitoglua@ornl.gov)

Bio: Dr. Askin Guler Yigitoglu is a Reactor Systems Modeling and Safety Analysis Staff Member of Advanced Nuclear Safety and Licensing Group under Reactor and Nuclear Systems Division at ORNL. She received her Ph.D. in Nuclear Engineering from the Ohio State University in 2016, where she developed a methodology to incorporate aging effects of passive components into probabilistic risk assessment. She earned her M.S. and B.S. in Nuclear Energy Engineering from Hacettepe University, Turkey. She is the recipient of the 2016 George Apostolakis Fellowship. Her main research interests are probabilistic risk assessment, dynamic reliability modeling of complex systems, Markov models and uncertainty quantification.

Country: USA
Company: Oak Ridge National Laboratory
Job Title: R&D Staff


Paper 4 CR227
Lead Author: Fernando Ferrante     Co-author(s): Ali Mosleh (mosleh@g.ucla.edu); Enrique Lopez Droguett (eald@g.ucla.edu); Justin Hiller (jhiller@ameren.com); Sergio Cofré-Martel (scofre@umd.edu)
A Bayesian Method for Estimating Potential Impact of Increase in STI on Component Failure Rates
Guidance for extending surveillance test intervals (STIs) in risk-informed applications, such as the surveillance frequency control program (SFCP) in the U.S., includes addressing the potential impact on a component's failure rate due to unseen or in-progress failure mechanisms. The STI extension methods described in the Nuclear Energy Institute (NEI) guidance for the SFCP (NEI 04-10) involve conservatively modeling STI-modified components in a probabilistic risk assessment (PRA) model to assess potential risk impacts. While the guidance in NEI 04-10 provides details on addressing the overall impact, it also includes a step to account for a periodic reassessment of the overall program impact. For this step, NEI 04-10 provides two options for how a periodic reassessment may be performed when incorporating revised STIs into the base PRA model. The first option is to use the original conservative data assumptions from the initial STI assessment; the second is to use data collection and statistical analysis to show that the reliability of the components affected by the STI change has not been degraded (or has improved) by the revised STI frequency. Because of the scarcity of failure data for some components, NEI 04-10 clarifies that the latter option may be limited by data collection issues (insufficient evidence). Hence, the lack of a statistical method for the second option could limit the implementation of this step under the SFCP to conservative assumptions under the first option. To these ends, the work presented here seeks a technical basis for a statistical methodology for periodic reassessments under the SFCP that could form the basis for a practical approach under NEI 04-10.
Bayesian updating, past plant-specific test/inspection and operational records, and failure mode assessment are considered in a general framework for deriving a relevant technical basis for further use. Actual plant data from a U.S. nuclear power plant employing the SFCP were leveraged to support the development of a mathematical framework. It is expected that this framework can be applied under further piloting by considering practical aspects of its use with a PRA model currently supporting an SFCP, as well as broader industry data to further calibrate its inputs. Ultimately, this effort represents an initial formal investigation into a basis for future practical use in an area not previously explored with mathematical rigor.
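A minimal sketch of the conjugate gamma-Poisson form of Bayesian updating that such a framework typically builds on is shown below, assuming a Gamma prior on the component failure rate and Poisson-distributed failures over the operating exposure. This is a generic illustration; the actual NEI 04-10 / SFCP methodology is not reproduced here.

```python
def update_failure_rate(alpha0, beta0, failures, exposure_hours):
    """Conjugate gamma-Poisson update of a failure-rate prior.

    Prior  : lambda ~ Gamma(alpha0, beta0), with beta0 in hours
    Data   : 'failures' events observed over 'exposure_hours' of operation
    Returns: posterior (alpha, beta) and the posterior-mean rate alpha/beta
    """
    alpha = alpha0 + failures          # shape grows with observed failures
    beta = beta0 + exposure_hours      # inverse scale grows with exposure
    return alpha, beta, alpha / beta
```

For example, a Gamma(1, 10,000 h) prior (mean 1e-4 per hour) updated with two failures over 40,000 additional hours gives a Gamma(3, 50,000 h) posterior, i.e., a mean rate of 6e-5 per hour, which could then feed the revised-STI basic events in the PRA model.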
Name: Fernando Ferrante (fferrante@epri.com)

Bio: Fernando Ferrante is a Principal Project Manager at the Electric Power Research Institute (EPRI) in the Risk and Safety Management group (RSM). Ferrante joined EPRI in 2017 as a Principal Technical Leader in RSM. He was promoted to Principal Project Manager within RSM in March 2021, gaining responsibility for direct oversight of RSM staff involved in human reliability, fire risk assessment, external flooding PRA, along with RIDM framework activities. Dr. Ferrante held positions as a risk analyst at the U.S. Nuclear Regulatory Commission and senior engineer at the Defense Nuclear Facilities Safety Board. Dr. Ferrante holds a Bachelor of Science degree in Mechanical Engineering from University College London, in the United Kingdom, and a Doctor of Philosophy degree in Civil Engineering from Johns Hopkins University.

Country: ---
Company: Electric Power Research Institute
Job Title: Program Manager


A PSAM Profile is not yet available for the presenter.