What Is “Deep Uncertainty”, and How Does It Relate to “Black Swans” and “Wicked Problems”?

by Christian Fjader

In the previous blog in this series I explored the differences between “risk” and “uncertainty” and argued that “risk” refers to the knowable and calculable, whilst “uncertainty” is unknowable and incalculable. This difference is significant because it means the first is controllable through the means of risk management, whilst the second is not. Moreover, one could argue that risk management is a conceptual framework that helps us to distinguish between the known, the unknown and the unknowable.

Consequently, it follows that:

  1. known risks can be identified and quantified before they materialise,
  2. unknown risks may be identifiable, but not enough knowledge presently exists to quantify them meaningfully, and
  3. unknowable risks are beyond our capability to identify, not to mention quantify, in any meaningful manner beforehand.

This third instalment of the blog series focuses on the concept of “deep uncertainty” (a.k.a. “extreme uncertainty”) and explores the different ways it relates to “black swans” and “wicked problems”.

What, then, is “deep uncertainty”? At the most rudimentary level, there are only two extremes of “uncertainty”: absolute certainty and total ignorance. However, the first is not attainable and the second is clearly undesirable. Hence, it is more meaningful to examine what lies between these two extremes. The academic literature presents varying numbers of levels of uncertainty, depending on the field of science in question. Perhaps most commonly, experts refer to five levels.

The first three levels deal with a lack of data and with uncertainty related to the methods we use to model and measure a problem. Whilst slightly more nuanced in practice, one can summarise that these levels of uncertainty can, at least to a degree, be addressed by statistical analysis (or by improving its methods) and by gathering additional data or information about the problem. The two latter levels (4 and 5), however, constitute “deep uncertainty”. In sum, these types of uncertainty refer to the uncertainty of the event itself, or to the uncertainty we have about our own knowledge, i.e. about the limits of our knowledge or of our capability to know.

This is significant because the bulk of strategic-level decision-making involves decisions about long-term issues with inherently uncertain outcomes. Such problems cannot be addressed by assigning probabilities to outcomes or by gathering more data, as the future is largely unknowable and unpredictable. In the context of strategic risk management, on the other hand, the significance of “deep uncertainty” relates to situations in which catastrophic events cannot be addressed by assigning probabilities to their occurrence. We only know they would be catastrophic should they occur, but cannot tell whether it would be reasonable to expect them to occur. These types of events are now commonly referred to as “black swans”: events that are extremely rare (and thus unknowable) but catastrophic in their consequences (uncontrollable).

This pervasive and vicious problem in decision-making was famously highlighted by U.S. Secretary of Defense Donald Rumsfeld, who stated in February 2002, in a Defense Department media briefing, when asked about evidence of weapons of mass destruction in Iraq: “There are known knowns. There are things we know that we know. There are known unknowns. That is to say, there are things that we now know we don’t know. But there are also unknown unknowns. There are things we do not know we don’t know”. Whilst seemingly cryptic, this statement underlines the propensity to be unprepared for completely unexpected catastrophic events that lie beyond the parameters of what would be considered “normal”, simply because we are not aware of their existence or of their possibility of occurring.

The second significant challenge of deep uncertainty in decision-making is that when such events materialise, they tend to be novel and ambiguous. In other words, they are entirely new to us, and even our collective experiences are unlikely to equip us with a ready solution. Deep uncertainty thus also breeds “wicked problems”. As stated in the first instalment of this blog series, “wicked problems” are by definition problems that have no clear parameters and, as such, are not easily definable, which is exactly why they are so hard to solve. If we cannot agree on what the problem and its parameters are, how could we reach an agreement on the solution?

What are the practical implications of such manifestations of “deep uncertainty” for strategic risk management and preparedness against such events? The bottom line is that events belonging to the category of deep uncertainty exceed “reasonable” expectations, which in turn are limited by the extent of our knowledge of the system we are operating within or dealing with. For example, how reasonable would it be to invest €1,000,000,000,000 in space defence systems to defend Earth from asteroids? Albeit that really is a LOT of money, wouldn’t it be totally worth it if such systems one day saved the planet from total destruction and, thus, the total annihilation of humankind? Wouldn’t any amount of money feel like a bargain in hindsight, should there be such an event? Or, on a personal level, would it be worth investing your savings in cryonics? Would it be worth it if you had an incurable cancer and there was a chance a cure could be found, but only after your reasonably expected lifespan? How could you know whether science will ever find a cure, or when? How would you know whether you could actually be revived to receive that cure? Would it simply be given to you, or would there be some regulatory or ethical hindrance, or would you be charged a billion euros for it? Hence, the question is: would a reasonable person make a bet against deep uncertainty? How about a responsible policymaker?
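The asteroid example above can be made concrete with a back-of-the-envelope expected-value calculation. The sketch below is purely illustrative: the probability figures and the monetary “value” assigned to averting planetary destruction are invented for the example. The point it demonstrates is that the invest/don’t-invest verdict flips entirely depending on which unknowable probability we assume, which is precisely the predicament of deep uncertainty.

```python
# Illustrative only: all probabilities and valuations are hypothetical.

def expected_net_benefit(p_event: float, loss_averted: float, cost: float) -> float:
    """Classic expected-value test: invest if p * loss_averted - cost > 0."""
    return p_event * loss_averted - cost

COST = 1e12   # the EUR 1 trillion space-defence investment
LOSS = 1e17   # assumed monetary 'value' of averting planetary destruction

# Under deep uncertainty we cannot defend any single probability,
# so sweep a range of equally (in)defensible assumptions.
for p in (1e-9, 1e-6, 1e-3):
    net = expected_net_benefit(p, LOSS, COST)
    verdict = "invest" if net > 0 else "do not invest"
    print(f"p={p:.0e}: net expected benefit = {net:.2e} EUR -> {verdict}")
```

Under ordinary risk, one would argue for a particular value of `p` and let the arithmetic decide; under deep uncertainty, no such value can be defended, so the calculation cannot settle the question at all.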

In sum, there are things we cannot predict. Hence, it is reasonable to state that we will likely never be able to eliminate surprises. However, there are things that we could, and should, have predicted. Worse still, there are things that we did predict but did nothing about… or at least not enough.