Mark Earls: To prepare for the next pandemic, we must be Time Lords

Thursday 4 February 2021

The findings of decision science – and past calamities like the Challenger Shuttle disaster – can help us prepare for multiple futures


Coincidence can be the mother of ideas – or so I have found. Last week marked the first anniversary of the World Health Organisation’s declaration of a “public health emergency of international concern,” announcing the proper arrival of Covid-19 in our lives, and foreshadowing the fatalities and lockdowns to come. 

As it happens, this coincided with the week of the 35th anniversary of the Challenger Shuttle disaster – when, in 1986, the cold January morning sky above Cape Canaveral was torn by an unbelievable fireball, as the liquid hydrogen and oxygen escaping from the shuttle’s ruptured external fuel tank mixed and burned catastrophically.

We’ve heard a lot over the last 12 months from governments and politicians about the “unprecedented” nature of the pandemic and the “unprecedented” measures needed to address it: a choice of words that encourages us to see this momentous collective experience as a one-off event, a unique conjunction of the stars, a once-in-our history eruption. 

Which it patently isn’t: plagues and pandemics have been a constant in human history, from the Black Death, to the typhoid and cholera of the 19th century (the Soho pub The John Snow commemorates the eponymous physician who traced the 1854 cholera outbreak to the contaminated water pump outside), to the “Spanish” flu, HIV/AIDS, Ebola and so on. Covid-19 is many things, but “unprecedented” is not among them. 

Why, then, were we so blindsided by this major disruption in our lives? Why were so many governments and so many commentators unable to see this coming?

Clearly, until some of the dust has settled, and we have been through something akin to the Rogers Commission set up by Ronald Reagan’s White House to understand what led to that other out-of-the-blue disaster in 1986, we will not know the full details of the coronavirus story. 

But viewing the pandemic through the lens of the Challenger catastrophe is enlightening: in both cases, the government of the day was repeatedly warned by scientists that something like this was likely to happen sooner rather than later. In both cases, collective deafness or “groupthink” had calamitous consequences.

The Rogers Commission Report is clear and unsparing: the fireball was always likely to happen. There was a fundamental design weakness – the O-rings, which sealed a joint in the right-hand solid rocket booster, were not intended for use in ambient temperatures below about 10°C. The ambient air temperature at the launch site that morning was 2°C. The O-rings failed, allowing hot gas to escape and a jet of flame to develop catastrophically.

The engineers at booster manufacturer Morton Thiokol were certainly aware of the danger before the January flight. The day before, on extended conference calls with Nasa, Thiokol’s engineers recommended a delay – but, under pressure from Nasa, their managers reversed the recommendation. Engineer Robert Ebeling, author of a 1985 memo headed “HELP” that was intended to draw his colleagues’ attention to the problem yet again, told his wife the night before launch that the Challenger would explode the next morning. 

Nasa managers also knew that the worst was quite likely to happen: during the early design and development of the shuttle, government engineers at the Marshall Space Flight Center had warned of the weakness of this kind of joint in a solid-fuel rocket and recommended a different design. Yet they were overruled, not least because the economics of the design depended on the boosters being built in segments elsewhere and assembled at the launch site.

What’s more, from the very first flight of the Space Shuttle in 1981, the majority of Nasa’s post-flight engineering reviews revealed significant erosion in these O-rings, and sometimes the kind of gas leak and “blow-by” that ended Challenger’s flight so early. 

So worried was the space agency that, in 1985, the joint was redesigned – some extra steel was added to strengthen it. But, crucially, the O-rings were retained, albeit now registered as a “Criticality 1” component (a major identified risk). In other words: the safety strategy ended, amazingly, at the mere categorisation of the specific peril rather than a plan to deal with it.

A similar behavioural pattern may be observed in the UK’s response to the threat of Covid. For many years, a “SARS++” pandemic had been on UK government and business “risk registers” (a risk register forces an organisation to identify significant risk factors and draw up plans to “de-risk” them). 
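
To make that abstract mechanism concrete, here is a minimal sketch of the logic a risk register embodies – score each risk by likelihood and impact, attach a plan to “de-risk” it, and let high scores trigger action rather than mere categorisation. It is written in Python purely for illustration; the entries, scores and thresholds are invented, not drawn from any real register.

```python
from dataclasses import dataclass

@dataclass
class Risk:
    name: str
    likelihood: int   # 1 (rare) to 5 (almost certain)
    impact: int       # 1 (minor) to 5 (catastrophic)
    mitigation: str   # the agreed plan to "de-risk"

    @property
    def score(self) -> int:
        # A simple likelihood x impact score, as many registers use
        return self.likelihood * self.impact

# Invented entries, for illustration only
register = [
    Risk("SARS-like respiratory pandemic", likelihood=3, impact=5,
         mitigation="Stockpile PPE; plan surge capacity in beds and staff"),
    Risk("Severe winter flooding", likelihood=4, impact=3,
         mitigation="Maintain flood defences; pre-position emergency teams"),
]

# A register only earns its keep if high scores trigger the plan,
# not just a label in a spreadsheet
for risk in sorted(register, key=lambda r: r.score, reverse=True):
    flag = "ACT NOW" if risk.score >= 12 else "monitor"
    print(f"[{flag}] {risk.name} (score {risk.score}): {risk.mitigation}")
```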

The danger was much-discussed at the highest levels of public discourse, and many senior figures had spoken about the urgency of pre-emptive mitigating action – Bill Gates being perhaps the most prominent such prophet in the 2010s, warning time and again of the need to act without delay to anticipate a respiratory viral pandemic.

So serious was the danger seen to be by public health experts that, in 2016, the UK government ran an emergency planning exercise (Exercise Cygnus) which identified both how unprepared the UK was for such an eventuality and precisely what to do to address the shortfall – to anticipate and prepare.

Here we crash against a peculiar irony: as a result of Cygnus, the UK was literally teaching the rest of the world how to prepare for and mitigate the impact of viral epidemics. Yet we did far too little to enhance the UK’s readiness on the dimensions Cygnus identified: lack of surge capacity in beds, staff and equipment; PPE availability; and social care preparation. Indeed, the report was viewed as too “terrifying” to publish, and its results were withheld by the NHS and the government. 

Even in the early days of the virus’s outbreak, the modelling of many epidemiologists, in the WHO and in certain UK universities, warned quite clearly of huge numbers of fatalities – warnings that were repeatedly ignored by the authorities. 

This was partly because that particular boy had cried wolf already: previous warnings about diseases like swine flu (2009) and Ebola (2013-16) turned out, fortunately, to overestimate the level of fatalities. Undoubtedly, this made it easier to ignore the new predictions of disaster concerning Covid – analogous to the warnings about Challenger’s O-rings prior to that other supposedly “unprecedented” accident.

Decision science – the growing interdisciplinary field that brings together data analysis, behavioural science, AI, anthropology and cognitive psychology to examine decision-making – has much to offer as we seek to explain both disasters and to work out what should be done differently in future. 

The Rogers Commission, for instance, suggested that “groupthink” – the tendency in certain circumstances for group norms and the desire for belonging to overtake individual judgement – was partly to blame for the decision to go ahead with that ill-fated January launch. Knowledge of a quite specific kind was subordinated to the collective will to press on with the project.

Another, related factor is that information suffers from entropy as it spreads – the meaning and the significance of uncomfortable news decays as it goes up and around an organisation, and supposed “facts” evolve and blur subtly as they encounter scepticism and hostility.

This phenomenon was clearly in play in the UK in late 2019 and early 2020 – together with what psychologists call “confirmation bias” and “motivated reasoning” (both ways in which we seek out evidence that supports our existing beliefs and views of the world, rather than sifting the evidence more objectively). 

So reclassifying the O-rings’ risk status did not stop the decision to launch Challenger. In the same way, the overhaul of the epidemiological early-warning system that will no doubt be recommended in due course could have been accomplished before Covid. But it wasn’t. We did not prepare as we should have. Habits of thinking and behaviour – the confidence of conformity – eclipsed rational decision-making. 

What Rogers didn’t cover is something more fundamental: the abiding weakness in our thinking about probability and the future – a weakness that affects us as individuals and collectively in the organisations where we collaborate. The breakthrough work of Nobel laureate Daniel Kahneman and Amos Tversky in behavioural economics has demonstrated that doctors – you’d hope a particularly numerate bunch – are just as prone as their patients to misconstrue and miscalculate probabilities. 

We have “lazy” minds, Kahneman and Tversky concluded; rarely will we actually “do the math”, jumping instead to conclusions based on rules of thumb like familiarity (is this something I’ve seen before?), salience or mental availability (what’s the first thing that comes to mind?) and, of course, habit (what do I usually do here?). We also discount the future for short-term gain – a bird in hand is, in some experiments, literally worth two in the bush. Dangers in the future are often less… dangerous. 
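
To see what “doing the math” would actually involve, here is a short worked example of the kind of diagnostic puzzle this research tradition made famous – with prevalence and test figures invented purely for illustration, not taken from any study cited here.

```python
# An illustrative base-rate problem (all numbers invented for the example):
# a condition affects 1% of patients; a test catches 90% of true cases
# but also flags 9% of healthy people. Given a positive result, how likely
# is the condition? Intuition tends to say "about 90%"; Bayes' rule disagrees.

prevalence = 0.01        # P(condition)
sensitivity = 0.90       # P(positive | condition)
false_positive = 0.09    # P(positive | no condition)

p_positive = sensitivity * prevalence + false_positive * (1 - prevalence)
p_condition_given_positive = sensitivity * prevalence / p_positive

print(f"P(condition | positive test) = {p_condition_given_positive:.2f}")  # roughly 0.09
```

The familiar, salient answer (the test’s 90 per cent accuracy) crowds out the unglamorous base rate – exactly the kind of shortcut the “lazy” mind prefers.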

And we are all – apart from the clinically depressed – the victims of an essentially human bias towards optimism. One great study by the Behavioural Insights Team – the team of behavioural scientists created out of Gordon Brown’s Cabinet Office and championed by the Cameron government for their ability to create “nudges” that make policy more effective – reported that this is particularly true of decision-makers in the public sector, although my experience in the private sector tells me this is equally true there.

Please don’t get me wrong: this weakness in thinking about the future is a product of evolutionary adaptations that are otherwise incredibly useful to us as individuals and as a species. Much of the time they work well (optimism bias helps us get up every day, and not despair at the futility of a great many of our plans).

The problem is that the same inclinations, honed by millennia of evolution, can also incline us to downplay the impending disasters that are part of the fabric of modern society, or hinder us from thinking deeply and broadly about the complexities of the future.

How might we head off this in-built tendency? Is it possible to see the future and act on it promptly? Part of the answer lies in the (now largely forgotten) work of Swedish neuroscientist David Ingvar. Ingvar was interested in how some people – generally, the creative and entrepreneurial – were happier with novelty and more readily embraced new opportunities. 

Disappointingly for those who long for supernatural explanations, he didn’t find that these individuals had a “sixth sense” or any spooky tea-leaf reading skills. What Ingvar did identify that was different about them was the feeling of “familiarity” that many reported when confronted with the new. 

“Familiar” in what sense? It seemed that they had literally imagined the new thing already, as a variation on something they already knew. Not for them the usual head-scratchy “is this a thing?” feeling most of us experience as we try to get to grips with a novelty in front of us. 

We need to imitate this instinct and – crucially – to do so multiple times. Above all, we have to resist the default longing to predict “the future” (n. singular). One of the burdens our poor human minds bring to thinking about time is the tyranny of the single storyline. 

We worry: is this the future? Or is this the future? Or this? What if I’m wrong? What if the other one is the future? Far better to accept and actively manage many potential stories – multiple lines leading out from now into the future. Why bet on only one horse? In policy terms, this means preparing actively for a number of different possible situations, not just one or two (“like now, but plus or minus 10 per cent”, to put it crudely): identifying ahead of time the impacts of many different possible futures, rather than scrambling to do so in real time, in the heat of the moment. 

In healthcare, as in other areas, this will mean creating excess capacity and just-in-case physical resources and capabilities. It will also mean identifying weak spots that pop up across many different futures: governments of all flavours have known that the social care system is inadequate, but the opportunity cost of fixing it has long seemed too large. 

True enough, some futures are more probable than others, and it can help to attach a number – a rough probability – to each potential future story. But, as Challenger and Covid teach us, it is far more important to be prepared for more eventualities than to fine-tune the probability of one – and, crucially, to work out what actions you would need to take to be ready for any of the really scary ones. 
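
To make the many-storylines exercise concrete, here is a minimal sketch with the storylines, probabilities and weak spots all invented for illustration. The useful output is not a single forecast but two lists: the weak spots that recur across futures, and the scary futures worth preparing for regardless of their odds.

```python
from collections import Counter

# Invented storylines, each with a rough probability and the weak spots
# it would expose. All names and numbers are illustrative only.
futures = {
    "mild seasonal wave":          {"probability": 0.50, "weak_spots": {"PPE supply"}},
    "severe respiratory pandemic": {"probability": 0.15, "weak_spots": {"PPE supply", "surge beds", "social care"}},
    "prolonged staffing crisis":   {"probability": 0.20, "weak_spots": {"surge beds", "social care"}},
    "no major shock":              {"probability": 0.15, "weak_spots": set()},
}

# Weak spots that recur across many storylines are the priorities,
# however finely the probabilities are tuned.
recurrence = Counter()
for story in futures.values():
    recurrence.update(story["weak_spots"])

for spot, count in recurrence.most_common():
    print(f"{spot}: exposed in {count} of {len(futures)} futures")

# Separately, flag the really scary storylines that demand preparation
# regardless of how unlikely they look today.
scary = [name for name, story in futures.items() if len(story["weak_spots"]) >= 3]
print("Prepare for anyway:", scary)
```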

We need to be Time Lords. We need to create “Memories of the Future”, if you like: the crucial first step towards being better prepared for the serious stuff that is surely coming down the track.

Mark Earls is a writer and advisor to business.