This is a follow-on from my previous article, “The Problem with Risk Management”.
We are so collectively poor at judgment and decision-making when it comes to complex risks because of a variety of hardwired biases, or heuristics. I hasten to add that these heuristics are not actually a bad thing. They can be incredibly valuable if, for example, a car mounts the curb and careens towards you. You don’t want to pause to calculate trajectories and options. No, you want the simple rule that says “big thing coming at me fast is bad!” and the response that says “jump!”
One example of a bias that works against us with more complex risks is the ‘availability heuristic’, whereby people predict the frequency of an event based on how easily an example can be brought to mind. Essentially it operates on the notion that “if you can think of it, it must be important.” That worked well when our ancestors regularly encountered snakes in the wilderness. In the 21st century, however, after weeks of watching the Twin Towers collapse and years of political rhetoric about the war on terror, people have been conditioned to fear terrorism and to see it as a much greater risk than it actually is.
The media play a big part in this. They don’t report deaths due to diabetes, heart disease or motor vehicles, simply because they are so commonplace. Events like homicide and airline accidents are rare but spectacular, so they get reported. It’s ironic, but the rarer the event, the more we see it, and so the more common we believe it to be. A classic instance of biased risk ratings is the fear and relative overestimation of the risk of flying compared with driving, even though motor vehicle fatalities are far more common than plane-crash fatalities. Equally, studies show that people rate the chance of death by homicide higher than the chance of death by stomach cancer, even though deaths from stomach cancer are five times more common than deaths from homicide. Commonplace deaths such as those from medical errors and road accidents are not newsworthy, so they never get a chance to trigger our availability heuristic, even though medical errors are potentially the third leading cause of death in the United States.
We have a range of other, equally impressive biases. ‘Optimism bias’ is the belief that we’ll do better than most others engaged in the same activity, and it goes some way towards explaining why we think that car accidents are more likely to happen to ‘other people’. A classic example is a university study which asked students which of 18 positive and 24 negative events (e.g. getting a good job, developing a drug problem) were more likely to happen to them than to others. On average, students considered themselves 15% more likely than others to experience the positive events, and 20% less likely than others to experience the negative events.
We also have a ‘control bias’, whereby people are more likely to accept risks if they feel they have some control over them, driving being one common example. We are also especially attuned to risks involving people and small children, with little regard to how likely those risks actually are. The ‘affect heuristic’ says that an overall good feeling toward a situation leads to a lower risk perception, and an overall bad feeling leads to a higher risk perception. This helps explain why people underestimate the risks of activities that carry some ancillary benefit (e.g. smoking, skydiving).
In short, we are a bundle of biases, and it’s amazing that we have any conscious understanding of risk at all. I’ll talk more about these biases and what to do about them in Section 7.16, Human Error and the Psychology of Risk, but for the moment suffice it to say that our limbic system, operating at an unconscious level, is what determines how we ‘feel’ about risks, and most of the time it drives our behaviours. Although reliance on affect and emotion is a quicker, easier and more efficient way to navigate a complex, uncertain and sometimes dangerous world, there are many decision-making circumstances in which there is no substitute for deliberation and analysis.
If we can collectively learn to improve, however slightly, the way our two risk-management brains interact, even that slight improvement in how we allocate funding is likely to vastly improve organizational performance, societal health, longevity and wellbeing. To do this, however, we need to understand the issues and have the right tools for the job. Hopefully this book will go some way towards helping you, as a manager or risk professional, to bring amygdala and neocortex into alignment.