It can’t happen here

The sentence It can’t happen here is the latest motto of the French government, to which they add because our nuclear plants are the safest in the world. My point here is not to discuss politics or nuclear engineering, but to focus on risk analysis.

I have only done a few risk analyses, but they taught me at least one thing: it can’t happen is never a valid conclusion, because every risk analysis relies on some hypotheses. These hypotheses usually seem very reasonable, but the probability that they are violated is never zero. This means that the actual conclusion is

It can’t happen with the hypotheses considered in the present plan; however, in the unlikely case where these hypotheses are wrong, our analysis cannot give any guarantees.
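To make this concrete, here is a minimal sketch of the law of total probability behind this point, with purely illustrative numbers (not from any real analysis): even when an analysis shows the event is impossible under its hypotheses, the overall risk is at least the probability that the hypotheses fail, multiplied by the risk in that case.

    # Minimal sketch: why "it can't happen" is never the full conclusion.
    # All numbers below are illustrative assumptions, not real data.

    p_hypotheses_hold = 0.999   # assumed: hypotheses are almost surely valid
    p_event_given_hold = 0.0    # the analysis proves: no event if they hold
    p_event_given_fail = 0.5    # unknown in practice; can be large

    # Law of total probability: the residual risk is nonzero as long as
    # the hypotheses can fail and the event is possible when they do.
    p_event = (p_event_given_hold * p_hypotheses_hold
               + p_event_given_fail * (1.0 - p_hypotheses_hold))

    print(f"Residual risk despite a 'perfect' analysis: {p_event:.4f}")
    # -> 0.0005, small but not zero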

It doesn’t sound as good as the simplistic It can’t happen, but it is the truth. The problem today with our leaders (and more generally with our society) is that we love our plans, and this entails many bad behaviors:

  • Completeness assumption. When there is a plan, we have a tendency to assume that it covers all possible cases. We will also try to maintain this belief as long as possible, even after getting strong hints that our hypotheses are wrong.
  • Irresponsibility outside the plan. When something happens outside of the plan, leaders fail to take responsibility for it. Statements like It is an act of God or It was impossible to predict are typical examples of such irresponsibility. However, we would expect the exact opposite from good leadership: we need leaders most when things go outside the plan. But improvisation is hard, and nobody wants to try it.
  • Reluctance to update the plan. The last point is that, even when the plan has been proven wrong, there is a strong reluctance to update it. Engineers are not too bad here: nuclear engineers around the world will update their plans after the current disaster. Politicians are not as good: for instance, they failed to update their plans after the 2008 economic crisis, although it proved that some of their hypotheses were wrong.

What can we do about this? We are not going to solve the world’s big problems, but we can try to make opinions shift in the right direction. First, by getting informed; I recently heard a very good radio interview with Patrick Lagadec (a risk management professor who wrote a recent paper on these topics), who went much deeper than I did here and definitely helped me structure my thoughts. Let’s make sure that we share this information.

Then, we can apply it in our daily lives. This applies even more to the security community, where risk analyses are common. The point is that, beyond having a plan, we need to be prepared to act when facing a situation that falls outside of the plan.

I can take one personal example, around Java Card. Java Card is perceived as a highly secure technology, and we have good plans in place to guarantee this, including Common Criteria evaluations, sharing of information about the latest attacks, and more. However, the plan doesn’t include any provision for the (unlikely) case where a Java Card weakness ends up exploited by attackers in a way that leads to major public exposure. Individual companies are likely to have crisis management procedures for such situations; however, synchronization between the companies would be much harder to achieve. I will check in the coming months that the Java Card Forum is able to react effectively in a crisis.

Finally, I would like to express my sympathy to people in Japan who read this, and my solidarity to all of those who are struggling through the currently unfolding crisis.
