Monday, August 20, 2018

Take on Fukushima

Better than most. My take on it in hindsight: the design was simply obsolete. It was a first-generation design; they should have built a new plant with current knowledge and technology, and I am sure it would have been built to a higher standard. I also like the point that the staff at these plants tend to be too isolated from society, so groupthink takes over.

A must-read for political and business leaders, safety experts and consultants, risk analysts and all those whose job it is to provide safe systems.

When disaster struck the Fukushima area of Japan on March 11, 2011, it caused severe damage to the nearby Fukushima Daiichi nuclear power plant, resulting in the worst nuclear disaster since Chernobyl in 1986. At the time, the disaster was thought to have been caused by an unpredictable natural event, in this case a tsunami.

However, later studies found that the disaster was the result of a “cascade of industrial, regulatory and engineering failures” that led to the failure of critical infrastructure. The backup generators meant to keep cooling the plant if main power was lost had been built in low-lying areas susceptible to flooding during a tsunami, something a proper hazard analysis would have identified.

With the generators washed away and the plant unable to cool itself, Fukushima Daiichi’s reactors melted down one by one. In addition, the company and the regulatory authorities had ignored warnings from scientists that higher tsunamis were possible.

This is just one example of how disasters do not strike suddenly but are built brick by brick on a series of overlooked shortcomings and incompetencies, as described in the book Meltdown by Chris Clearfield and Andras Tilcsik.

We live in VUCA times: Volatile, Uncertain, Complex and Ambiguous. In such conditions, failures follow a pattern. “Different parts of a system unexpectedly interacted with one another,” the authors explain. “Small failures combined in unanticipated ways, and people didn’t understand what was happening.”

Failures stem from two variables. The first is complexity: the extent to which a system is linear and observable, like an assembly line, or interconnected and largely invisible, like a nuclear power plant. The second is coupling: the degree to which a system possesses “slack”, the time and flexibility to manage problems; a tightly coupled system has little or none. Our relentless drive to increase complexity and wring out inefficiencies, the authors warn, moves us into a danger zone and sets us up for calamity.

Groupthink, where participants adjust their opinions to match the group, is a common phenomenon leading to such disasters. We have seen it in India too, where the country’s largest pharmaceutical company went down when employees, barring a few voices, went along with the fraud perpetrated by the management.

A leadership attitude that encourages diverse opinions to surface builds a culture where dialogue takes place freely. Homogeneity encourages groupthink. Where complex decisions are required, it is management’s responsibility to create cross-disciplinary, cross-functional groups that can discuss complex matters in an environment where different perspectives and viewpoints are raised and pertinent issues are brought to management’s attention.

Outsiders who can see issues from a distance should be embedded in such decision-making groups. The book gives the example of NASA’s Jet Propulsion Laboratory, where relevant strangers are embedded in review teams to avoid groupthink.

In complex systems, small incidents, when ignored, can also lead to big disasters, as in the Three Mile Island nuclear accident in the United States, which is discussed extensively in the book. Operators in plants, as well as nurses in hospitals and staff in other systems, must be sensitised not to ignore small divergences but to find their cause, supported by facts and study.

A second lesson is to provide slack in a complex system such as a nuclear plant, that is, to build a buffer into the system. This is no doubt inefficient and frowned upon by efficiency experts, but in the event of a failure it gives sufficient time to shut down parts of the system or the whole of it.

Today, we are entering the age of Industry 4.0, where sensors provide information to computers in plants, automobiles and homes through the Internet of Things (IoT). Even everyday systems such as driverless cars are becoming increasingly complex, and too much reliance on “fail-safe” systems can create the conditions for more failures in the future.

We have seen small glitches bring down the reservation systems of major airlines, causing massive confusion, delays and losses. A small glitch in its computer systems led to massive unintended stock purchases by Knight Capital, which ultimately led to its closure.

The book provides several solutions: designing more transparent and loosely coupled systems; using structured decision tools; learning from near misses and other warning signs; encouraging dialogue and dissent; building diverse teams that include and learn from outsiders; and preparing for and managing crises more effectively.

The authors argue that solutions are available but require cultural and behavioural changes in our organisations. The cases are drawn from a variety of disasters, from nuclear plants to social media campaigns. What links them all is the human element. The social sciences are increasingly uncovering the behavioural causes of disasters, and it is time such professionals were included in the decision-making process.

Suhayl Abidi is a research advisor at the GOG-AMA Centre of International Trade, Ahmedabad.
