Earlier in the year I ran a webinar exploring strategic planning after the onset of an unanticipated disruptive event, COVID-19. The material shared the benefits of developing such a plan even after the disaster had struck and laid out steps for doing so. I suspect that a good number of businesses and leaders simply cobbled together a reaction plan to mitigate losses and then attempted to put together some type of tactical plan to navigate the duration of the disruption by addressing new risks as they developed. Such a plan falls into the category of Risk-Based Decision Making. It is a plan that most likely amounts to waiting for something to happen, making another containment/mitigation plan, and hoping that the problem will go away soon. Hope is not a strategy. Risk-Based Decision Making is rooted in the visceral mindset, not the logical one. There is, of course, another side to this: the opportunistic side. Given that the return to normal, the new normal, is not even on the calendar yet, there is still plenty of time to lay out a strategic plan that provides not only sustainability through this prolonged disruption but also explores the potential opportunities coupled to it. That approach falls into the category of a growth mindset rather than a fixed mindset.
Abstract: Warranty analysis is offered in many software packages, and the examples in those packages are usually based on simple sets of data. By contrast, this talk presents a case study of how such an analysis was conducted for a more complex problem. Join the speaker on this journey, with its numerous twists and turns.
Bio: Dr. Joseph Voelkel is Professor Emeritus (retired July 2020), School of Mathematical Sciences, Rochester Institute of Technology, Rochester, New York, where he had been Chair of the Graduate Statistics Program. In addition to teaching graduate students, he consults for a wide range of clients. His focus ranges from explaining fundamental statistical methods and mentoring Six-Sigma-project teams, to teaching advanced techniques and developing novel methods to solve complex client-specific problems. He is currently consulting and is also engaged in contract work for the Rochester Data Science Consortium at the University of Rochester.
DAGs (directed acyclic graphs) are seen as a significant advancement in the realm of causal analysis, expanding the analysis boundaries to include essential influencers that are absent from other approaches. DAGs have proven their utility in mathematics, particularly graph theory, and in computer science. Applying them to causal analysis enriches the practitioner's ability to account for failure causes that the historical record has revealed as significant contributors. This overview will acquaint the attendee with DAGs and their potential contribution to the discipline of reliability.
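As a concrete illustration of the idea (a minimal sketch, not material from the talk), a causal DAG can be encoded as an adjacency list, and a traversal over the reversed edges recovers every upstream cause that can influence a given failure mode. The node names below are hypothetical.

```python
def ancestors(dag, node):
    """Return every upstream cause of `node` in a causal DAG
    given as {cause: [direct effects]} adjacency lists."""
    parents = {}  # reverse the edges: effect -> its direct causes
    for cause, effects in dag.items():
        for effect in effects:
            parents.setdefault(effect, []).append(cause)
    seen, stack = set(), list(parents.get(node, []))
    while stack:  # depth-first walk up the reversed edges
        cause = stack.pop()
        if cause not in seen:
            seen.add(cause)
            stack.extend(parents.get(cause, []))
    return seen

# Hypothetical failure-cause DAG for a mechanical component
dag = {
    "vibration": ["fatigue"],
    "humidity": ["corrosion"],
    "fatigue": ["crack"],
    "corrosion": ["crack"],
    "crack": ["field failure"],
}
print(sorted(ancestors(dag, "field failure")))
# → ['corrosion', 'crack', 'fatigue', 'humidity', 'vibration']
```

Here the query surfaces indirect contributors such as vibration and humidity, which is precisely the kind of influence a DAG-based analysis keeps in scope.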
Abstract: While there are many software reliability models, there are relatively few tools to automatically apply these models. Moreover, these tools are decades old and difficult or impossible to configure on modern operating systems, even with a virtual machine. To overcome this technology gap, we are developing an open source software reliability tool for the software and system engineering community. A key challenge posed by such a project is the stability of the underlying model fitting algorithms, which must ensure that the parameter estimates of a model are indeed those that best characterize the data. If such model fitting is not achieved, users who lack knowledge of the underlying mathematics may inadvertently use inaccurate predictions. This is potentially dangerous if the model underestimates important measures such as the number of faults remaining or overestimates the mean time to failure (MTTF). To improve the robustness of the model fitting process, we have developed expectation conditional maximization (ECM) algorithms to compute the maximum likelihood estimates of nonhomogeneous Poisson process (NHPP) software reliability models. This talk will present an implicit ECM algorithm, which eliminates computationally intensive integration from the update rules of the ECM algorithm, thereby achieving a speedup of between 200 and 400 times that of explicit ECM algorithms. The enhanced performance and stability of these algorithms will ultimately benefit the software and system engineering communities that use the open source software reliability tool. An overview of the Software Failure and Reliability Assessment Tool (SFRAT) will also be provided.
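For context, a pure-Python sketch of maximum-likelihood fitting for one classic NHPP model, the Goel-Okumoto model with mean value function m(t) = a(1 - e^(-bt)), is shown below. It alternates conditional updates of the two parameters in the spirit of conditional maximization; it is an illustration only, not the talk's implicit ECM algorithm or SFRAT code, and the failure times are made up.

```python
import math

def fit_goel_okumoto(times, T, iters=100):
    """Maximum-likelihood fit of the Goel-Okumoto NHPP model,
    m(t) = a*(1 - exp(-b*t)), for cumulative failure times
    observed on [0, T], via alternating conditional updates:
    a has a closed form given b; b is found by bisection given a."""
    n, s = len(times), sum(times)
    b = n / s  # crude starting value
    for _ in range(iters):
        a = n / (1.0 - math.exp(-b * T))     # solves dL/da = 0 given b

        def g(x):                            # dL/db at b = x, given a
            return n / x - s - a * T * math.exp(-x * T)

        lo, hi = 1e-12, 10.0 * b + 1.0       # bracket a root of g
        while g(hi) > 0:
            hi *= 2.0
        for _ in range(80):                  # bisection on b
            mid = 0.5 * (lo + hi)
            if g(mid) > 0:
                lo = mid
            else:
                hi = mid
        b = 0.5 * (lo + hi)
    a = n / (1.0 - math.exp(-b * T))         # final consistent a
    return a, b

# Made-up failure times (hours) showing reliability growth
times = [10, 25, 47, 80, 120, 175, 245, 330]
a, b = fit_goel_okumoto(times, T=400.0)
# a estimates the total fault content; a - len(times) faults remain
```

The estimate of remaining faults, a minus the number of observed failures, is exactly the kind of quantity the abstract warns can mislead users if the fitting algorithm has not actually converged to the maximum-likelihood solution.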
Autonomous vehicle (AV) navigation technology has been a prime focus of recent technology innovation. However, the industry’s advances on the issues of safety, risk, and reliability have been slow. Several accidents and near misses have already occurred, and the mean distance driven to an unsafe condition, near miss, or accident has been far shorter than that of conventional road vehicles. Concerns over safety, software reliability, security, hacking/misuse, and licensing are mounting. Given the vacuum in systematic safety, risk, and reliability considerations in this rapidly evolving technology, the convergence of many related resources, involving academia, the autonomous vehicle industry, insurance, and associated government agencies, will be necessary to identify and address the safety technologies, societal/policy considerations, and regulatory developments needed. This presentation will give an overview of AV systems from the perspectives of (i) reliability and manufacturing, (ii) society/ethics, (iii) regulation and compliance, and (iv) the readiness of safety/risk analytical and simulation tools/techniques. On reliability and manufacturing, the presentation will focus on (i) duty-cycle and design-life implications resulting from the increased usage rate of AV cars, (ii) reliability enhancements via redundancy architectures, and (iii) prognostic health management of an AV’s critical systems. On regulatory and authority-required accident reporting, California (CA) is currently the only state requiring that AV accident/risk experience be publicly reported; other states permit AV testing with no reporting requirements. A social response will likely emerge should autonomous cars be introduced. Like other risky technologies, autonomous vehicles will contain embedded values: through their decisions, engineers, scientists, designers, regulators, and developers all make choices that implicitly or explicitly enhance or discount certain cultural and societal values.
Thus, the technology will inevitably be the subject of political discourse and debate. On safety/risk analysis tools, questions arise such as “What is acceptable risk?” and “Do AVs need to be ‘as safe as’ or ‘safer than’ traditional vehicles?” To answer these questions, the specifications of AVs are analyzed with respect to their safety, reliability, and security (SRS). A challenge in risk analysis is to identify everything that can go wrong: how can we deal with the unknown unknowns? Various assessment techniques are currently in place, and many of them can still play a part in supporting the SRS of autonomous systems; however, many areas require new modeling techniques to be developed.
Dr. Mohammad Pourgol-Mohammad is a safety/reliability analyst in multidisciplinary systems analysis with Keurig Green Mountain and an adjunct Associate Professor of Mechanical Engineering at the University of Maryland (UMD); he was previously an Associate Professor of Reliability Engineering at Sahand University of Technology (SUT). He received his Ph.D. in Reliability Engineering from UMD and holds one M.Sc. degree in Nuclear Engineering and another in Reliability Engineering from UMD. His undergraduate degree was in Electrical Engineering. Dr. Pourgol-Mohammad has more than 18 years of work experience, including research and teaching in safety applications and reliability engineering, at institutions including Johnson Controls, Sahand University of Technology, FM Global, Goodman Manufacturing, UMD, the Massachusetts Institute of Technology (MIT), and the University of Zagreb, Croatia. He is a senior member of ASQ, ASME (currently Chair of the ASME Safety Engineering and Risk/Reliability Analysis Division (SER2D)), and ANS, a member of several technical committees, and a registered Professional Engineer (PE) in Nuclear Engineering in the State of Massachusetts. He is an ASQ Certified Reliability Engineer (CRE), Certified Six Sigma Black Belt (CSSBB), and Certified Manager of Quality/Organizational Excellence (CMQ/OE). He has authored more than 150 papers and reports on his research and has one US patent pending. His efforts have been recognized with several awards.
Lean manufacturing has been around for quite some time and has morphed over the years into variants such as LEAN SIGMA, LEAN SIX SIGMA, LEAN AGILE, LEAN Thinking, LEAN Safe, LEAN Control, etc. By definition, LEAN is simply an approach to removing waste from what is identified as a value stream. This presentation will focus on the fact that Reliability is a necessary condition for LEAN, and will overview the relationship between the two disciplines, sharing some of the tools that Reliability incorporates to ensure that LEAN is effective and doesn’t lose sight of the overall mission effectiveness of the enterprise. This will be an interactive session, using polls to allow the attendees to share their perspectives.
BIO: David is a senior Reliability engineer with experience in multiple industries, including but not limited to medical device development, instrument development, high volume manufacturing, energy, aerospace, commercial vehicles and powertrain development.