By Harry Rosenthal, General Manager, Risk Management Services
One of the most challenging aspects of a risk professional’s job is working through the problem of actual risk versus perceived risk. We recognise that all of our work is accomplished through the actions of people and the decisions they make; this is often called the risk culture of an organisation. While, for efficiency’s sake, we think of our employers as organisations, companies, institutions or entities, in reality they are collections of people working together to ensure the organisation achieves a larger goal. While our universities or institutions may appear to be monolithic bodies on the letterhead, in reality they are made up of numerous individuals, each making decisions based on their own perceptions of risk and acting in ways which manifest those beliefs.
This bias is clear when we examine risk after it manifests, in root cause analysis. While many of us can easily identify a loss’s proximate cause, such as road conditions, severe weather or flooding, we often struggle to understand how individuals made decisions when faced with those conditions. Analysis often shows that the impacts of these large loss events are magnified by poor decisions made earlier, based on someone’s perception of risk. For example, the Mutual has seen examples of repeated widespread flooding on campuses due to heavy rains. These events often result in heavy financial losses and serious operational interruption. Unfortunately, we also see cases where institutions elect not to make upgrades that would mitigate these losses, because individuals perceive the heavy rains as “one off” events not worthy of investment in mitigation. In some cases, perhaps this perception of risk is indeed correct and the work can safely be deferred, but in many cases we have seen, losses continue to occur, continue to damage property and continue to interrupt teaching and learning.
This decision not to mitigate is not made by institutions, but by individuals, or committees of individuals, each of whom harbours their own perception of risk. While many risk professionals spend the greater part of their working day dealing with issues such as policy, procedure and compliance, our greatest impact can be in the more efficient use of the institution’s limited resources. This is achieved by better aligning perceived risk with actual risk.
While this may sound like a jumble of “management speak”, this gulf between actual and perceived risk can waste a great deal of resources, time and energy, because we are constantly looking in the wrong place. There are numerous examples in our personal lives. I recall, many years ago when I had young children, that many of the conversations I had with other parents concerned risk. There was great concern in those days about children being kidnapped by strangers, and as a result mini-industries flourished to capitalise on those fears. The products they produced ranged from child leashes to child identi-kits. Books were written about stranger danger, and public resources were marshalled to create safe houses for children, all focused on reducing the possibility of abduction. Stranger abduction is truly a parent’s worst nightmare, and tragically it does occur; however, it is actually remarkably infrequent. The vast majority of abductions are not by strangers, and this circumstance was far from a significant individual risk. The leading cause of non-natural death, claiming thousands of children, was actually poisoning from household chemicals. There was a strong perception that abduction by strangers was the main risk, and resources were poured in that direction; however, vastly more children’s lives would have been saved by simply installing childproof locks on the cabinets where drain cleaner was stored.
This illustrates a problem: as risk professionals, we too have been slow to further develop the language of risk. For years we have relied on traditional verbiage such as frequency, severity and time to express risk, and we assume that everyone understands. All of this is based on the mathematics of probability, which is very understandable to a computer but loses something in translation when we deal with humans. Our job is to deal with humans. The computers are our tools; humans are our customers. However, if we look at the way we professionally speak about risk, it appears to be aimed at computers and not at humans. While it may be comforting to know we are minimising desktop crashes of Windows 10, this comes at the price of failing to communicate with our customers and of failing to play an active role in guiding our organisations to use their limited resources well.
For example, how many of us are confident that people in our organisation understand the concept of a one-in-100-year flood? Does it mean the university will only experience a flood once in 100 years, or that the university has a 1% chance of a flood each and every year? What different decisions take place when one definition is used over the other? While a computer might not care about either definition, for those responsible for ensuring the integrity of university infrastructure and the continuity of operations, the way they allocate resources could differ depending on their perception.
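The gap between the two readings can be made concrete with a little probability. Here is a minimal sketch in Python, assuming the standard interpretation (a 1% chance in any given year, with years independent of one another):

```python
# A "one-in-100-year" flood means a 1% chance in any given year, not one
# flood per century. Assuming independent years, the chance of seeing at
# least one such flood over a planning horizon is surprisingly high.

def chance_of_at_least_one_flood(annual_probability: float, years: int) -> float:
    """Probability of at least one flood event over the given horizon."""
    return 1 - (1 - annual_probability) ** years

for horizon in (10, 30, 100):
    p = chance_of_at_least_one_flood(0.01, horizon)
    print(f"Over {horizon} years: {p:.0%} chance of at least one flood")
# Prints roughly 10%, 26% and 63% respectively -- a "one-in-100-year" event
# is more likely than not to occur at least once in a century.
```

Even under this textbook interpretation, a campus planner with a 30-year asset horizon faces better-than-one-in-four odds of the flood occurring, which is a very different message from “once in 100 years”.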
One promising area of risk expression, at least on a personal level, is the micro-mort. Simply stated, it is a unit of risk expressed as a one-in-a-million chance of death. It was developed by Professor Ronald A. Howard of Stanford University as part of a model of decision analysis, and it is being popularised in Australia by Professor Hassan Vally of La Trobe University. Like most risk measures, it is based on past data and statistics, and provides only rough estimates of possible future outcomes, but it is a step away from computer-friendly risk communication and toward human-friendly risk communication. We can all understand a micro-mort based risk discussion.
For example, it would be almost impossible to pick up a newspaper in the summer and not read about a shark attack somewhere in the world. Humans are interested in shark fatalities, and no news reporting system could resist a story about a shark. As a result, every person in Australia has an internal “a shark may kill me” metric in their head, whether expressed or imagined, and it drives decisions about how close, how long and how deep they go into the ocean. It also guides whether they engage in sports such as surfing, scuba diving or ocean swimming. If you ask someone why they don’t do more ocean swimming, they may say, “Oh, I don’t have the time.” In reality they are thinking, “I don’t want to be eaten alive while trying to stay healthy; it seems counter-productive.”
If you express the chance of being killed by a shark in micro-morts (i.e. chances in a million), shark attack currently stands at 8 micro-morts (an eight-in-one-million chance). Compare this to skydiving which, according to Professor Vally, carries between eight and nine micro-morts per jump (roughly a one-in-100,000 chance of dying). He further notes that running a marathon incurs 7 micro-morts per run. SCUBA diving, another popular but risky activity, rates five micro-morts per dive. Of course, the gold standard of accidental death is driving a car, and it is interesting to note that you earn one micro-mort for every 400 kilometres of driving, unless you are on a motorcycle, where you earn that single micro-mort every 10 km.
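The driving figures above lend themselves to a simple per-trip calculation. The sketch below uses only the per-kilometre rates quoted in this article (one micro-mort per 400 km by car, per 10 km by motorcycle); the numbers are illustrative, not authoritative data:

```python
# Convert a trip distance into micro-morts, using the per-kilometre
# rates quoted above (illustrative figures only).

KM_PER_MICROMORT = {"car": 400, "motorcycle": 10}

def trip_micromorts(distance_km: float, mode: str) -> float:
    """Micro-morts accumulated over a trip of the given distance."""
    return distance_km / KM_PER_MICROMORT[mode]

# An 800 km round trip:
print(f"Car: {trip_micromorts(800, 'car'):.0f} micro-morts")    # 2
print(f"Motorcycle: {trip_micromorts(800, 'motorcycle'):.0f}")  # 80
```

On these figures, the same 800 km journey by motorcycle carries roughly the risk of nine or ten skydives, which is exactly the kind of comparison a human can act on.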
While we could debate these numbers all day, the significant fact is that we could debate these numbers all day. We now have an easy-to-understand language for expressing future risk, one based on actual risk rather than perceived risk. This language is very engaging to humans, and when we use it we suddenly understand actual risk better. We can talk about risks with passion and professional confidence. We should be looking for ways to make the risks we manage every day in our university lives equally engaging and human friendly: ways to express risk less in terms of incidents-per-100-years and more in terms of actual occurrences, expressed in ways specific to our institutions. For example, if we tell our universities that we are having a one-in-100-year flood every 10 years, what are we really saying? Is this clear information for decision making, and is it effective risk communication? Not really. We can do better, and we must seek ways to better communicate risks to our stakeholders.
Finally, I recently sat next to a person on a plane who was apparently terrified of flying, believing the risk of dying was high, especially on this particular flight. Phobias are different from perceptions of risk, and perhaps an irrational phobia was at work, but the experience illustrated two things. Firstly, this person’s life was adversely affected: she was needlessly agitated and stressed by something which had a very, very low likelihood of occurring. Her perception of risk was controlling her life, her options and her decision making. Secondly, it convinced me I need to buy bigger headphones.