So why do risk institutes and associations not publish research or guidance on the topic?
It would probably be fair to say that academic institutes do in fact publish plenty of papers on cognitive biases each year. The problem is that they seem to be the main channel for such reading, and academic theory doesn't usually translate well into practice or guidance.
Material decisions made in the foreground of uncertainty are a driver of risk; we know this to be true. Yet the source of such decisions can sit anywhere on a spectrum, from the aleatory aspects of uncertainty at one end to the stakeholder's perception of uncertainty at the other. When it is our perception that leads to unwanted outcomes, there is a sense that culpability now has an owner.
Douglas Hubbard puts it well in his book How to Measure Anything:
"It is a feature of the observer, not necessarily the thing being observed. When I observe the average weight of salmon in a river by sampling them, I'm not changing their weight, but I am changing my uncertainty about their weights" | Douglas Hubbard
Herein lies the paradox: an intangible entanglement between reality and our perception of it, which, in short, is difficult to assess. However, there are a few accepted methods that attempt to do just this, including the Black-Litterman Model and quantitative risk-scoring systems such as Random Forests, Principal Component Analysis, and even Logistic Regression.
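To make the scoring idea concrete, here is a minimal sketch of one of the techniques named above, logistic regression, used as a quantitative risk-scoring model. The risk factors, data, and threshold are entirely synthetic assumptions for illustration, not drawn from any real risk register.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(42)

# Hypothetical risk factors: exposure size and control weakness, both on a 0-1 scale
X = rng.uniform(0, 1, size=(200, 2))
# Synthetic "loss event" outcomes, correlated with the two factors plus noise
y = (X[:, 0] + X[:, 1] + rng.normal(0, 0.3, 200) > 1.1).astype(int)

model = LogisticRegression().fit(X, y)

# Score a new decision: high exposure, weak controls
risk_score = model.predict_proba([[0.9, 0.8]])[0, 1]
print(f"Estimated probability of a loss event: {risk_score:.2f}")
```

The output is a calibrated-ish probability rather than a gut-feel rating, which is exactly what makes such scores comparable across stakeholders.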
All of these tools help narrow the perceptive boundary if applied correctly. Perhaps a good place to start is to encourage risk managers to detect and counter cognitive biases using diagnostic tools such as the Receiver Operating Characteristic; you can read more about this technique here [LINK].
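As a hedged sketch of how ROC analysis can serve as such a diagnostic: score a stakeholder's pre-decision risk ratings against the outcomes that actually materialised. The ratings and outcomes below are invented for illustration; an AUC near 1.0 means the judgements discriminate well, while an AUC near 0.5 is no better than chance, a possible symptom of systematic bias or noise.

```python
import numpy as np
from sklearn.metrics import roc_auc_score

# Stakeholder's subjective risk ratings (0 = negligible, 1 = certain loss)
ratings = np.array([0.9, 0.2, 0.7, 0.4, 0.8, 0.1, 0.6, 0.3, 0.5, 0.95])
# What actually happened (1 = loss event occurred)
outcomes = np.array([1, 0, 1, 0, 1, 0, 0, 0, 1, 1])

# Area under the ROC curve: how well the ratings rank losses above non-losses
auc = roc_auc_score(outcomes, ratings)
print(f"AUC of the stakeholder's risk ratings: {auc:.2f}")
```

Run periodically over a decision log, this gives a simple, outcome-anchored measure of judgement quality per stakeholder.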
It reads as though the approach you suggest helps measure the effect cognitive biases may have on the observer, but does not necessarily prevent them or reduce their impact at the moment a decision is made?
Alexei, yes, you are correct. If set up correctly, the Receiver Operating Characteristic can be used to measure 'the truth' (if there is such a thing) within a decision made in the foreground of uncertainty. It works by comparing the inherent level of risk before a decision with the outcome [dare we say residual :-)] after it.
Does it prevent cognitive biases?
No, that is not possible, because the stakeholder needs the freedom to act; otherwise they aren't a stakeholder but a slave in the system of uncertainty. We simply track how good their choices are, based on the differential between the uncertainty and the outcome of those decisions.
Other popular techniques for helping stakeholders reduce cognitive biases include tracking error reports [LINK]. With all the methods I have listed here and in the blog post, the stakeholder has to make a mistake once in order to learn from it. After that, we track their cognitive biases over time as they make additional choices.
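The "track over time" step can be sketched very simply. Assuming each decision is logged as a (predicted risk, realised outcome) pair, a rolling Brier score shows whether a stakeholder's judgements are improving. All data below is synthetic; the noise term is constructed to shrink over time, mimicking a stakeholder who learns from early mistakes.

```python
import numpy as np

rng = np.random.default_rng(7)
n = 60

outcomes = rng.integers(0, 2, n)             # realised loss events (0 or 1)
noise = np.linspace(0.4, 0.1, n)             # judgement error shrinking over time
predictions = np.clip(outcomes + rng.normal(0, 1, n) * noise, 0, 1)

brier = (predictions - outcomes) ** 2        # per-decision squared error

window = 20                                  # decisions per rolling window
rolling = np.convolve(brier, np.ones(window) / window, mode="valid")

# The rolling score should tend to fall as the judgements sharpen
print(f"Early rolling Brier score: {rolling[0]:.3f}")
print(f"Late rolling Brier score:  {rolling[-1]:.3f}")
```

A flat or rising curve on real decision logs would flag a stakeholder whose biases are persisting rather than being corrected.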