The best way to get expert judgement is not to ask the expert?

In our personal and public lives, we are often faced with the possibility of events that are unlikely to happen but will have catastrophic consequences if they do: heart attacks, bank collapses, earthquakes, Space Shuttle disasters and invasive species precipitating ecological meltdown, to name a few. We cannot ignore these events, and it is important to make the best estimates of their probabilities of occurrence in order to formulate judicious and ethical risk-management policies. This is a formidable challenge because the rarity of these events makes it impossible to collect enough data within a reasonable time horizon to arrive at statistical estimates, especially if the potential consequences are grave enough to necessitate urgent preventive measures. Furthermore, empirical investigation may be altogether impossible unless we bring about the very catastrophe we are trying to avert.

When we are thus stuck between a rock and a hard place, expert judgement may have to serve as a substitute for quantitative data; this was the focus of two seminars given by Mark Burgman, Professor of Environmental Science at the University of Melbourne and Director of the Australian Centre of Excellence for Risk Analysis, at the Cambridge Conservation Seminar series and Cambridge Public Policy seminar series.

One of the first messages to hit home from Mark’s empirical research with human subjects is that there is no correlation whatsoever between how expert a person is perceived to be by peers in a discipline and how good that person’s actual judgement is of a problem in that discipline. People assess the expertise of their peers mainly on age, appearance and number of publications, but none of those criteria correlates with the quality of one’s judgement. Indeed, there is no known way to segregate in advance the people who will deliver good judgements of a given problem from those who will deliver bad ones.

Mark’s studies also show that groups containing a mixture of experts and non-experts collectively make judgements with lower bias and higher precision than individuals, as long as the group is not too large. Equal weighting should be given to everyone’s contribution; although the perceived experts may have more experience and context-awareness, they also have a higher tendency to be overconfident. The empirical results show that their judgements are no less error-prone than those of the novices. Furthermore, it was found that no individual, expert or otherwise, performed consistently across different problems within the same discipline.
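
To get an intuition for why equal-weighted pooling helps, here is a minimal simulation of my own (an illustration, not Mark’s code), assuming each judge’s estimate is the true value corrupted by independent error; the plain average cancels much of that error:

```python
import random
import statistics

# A minimal sketch (my own illustration, not Mark's code): each judge
# reports the true value corrupted by independent noise, and the group
# estimate is the plain, equal-weighted mean of all judgements.

TRUE_VALUE = 500   # e.g. the actual number of beans in the jar
N_JUDGES = 10
N_TRIALS = 1000

random.seed(42)
single_errors, group_errors = [], []

for _ in range(N_TRIALS):
    # Each judge's error is assumed independent of the others'.
    estimates = [TRUE_VALUE + random.gauss(0, 80) for _ in range(N_JUDGES)]
    single_errors.append(abs(estimates[0] - TRUE_VALUE))
    group_errors.append(abs(statistics.mean(estimates) - TRUE_VALUE))

print(f"Mean error of a single judge: {statistics.mean(single_errors):.1f}")
print(f"Mean error of the group mean: {statistics.mean(group_errors):.1f}")
```

With ten judges, the group mean typically lands several times closer to the truth than any single judge, though note that averaging cannot remove a bias that everyone shares.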

Such teamwork can be a powerful tool for practitioners in diverse disciplines, from ecosystem conservation to business dynamics, who all deal with problems embedded within complex dynamic systems where numerous factors interact through a convoluted topology of causal feedback loops. One can only hope to arrive at the best possible solution when all stakeholders in the problem at hand partake in the discussion from beginning to end: this allows the problem to be tackled from diverse and complementary perspectives, and provides ample opportunity to reconcile conflicting objectives, so as to arrive at the best possible understanding of the problem and a consensus that everyone truly believes in. This, you would probably agree, is preferable to a situation where disputes cannot be resolved and have to be put to the vote. There are numerous voting systems but, as Mark highlights, none of them is completely satisfactory and there is no known perfect solution. True democracy is an illusion, although it doesn’t work too badly in practice.

Mark walked us through the basic nuts and bolts of group-based expert judgement using a very simple but highly illustrative example of estimating, or judging, the number of beans in a jar. Unlike a real-world rare extreme event, this setup was amenable to experimental manipulation. A judgement is first obtained from each individual in the group, and Mark advocates putting four subsidiary questions to each person:

1. What is your highest estimate of the number of beans?
2. What is your lowest estimate?
3. How confident are you that the actual number lies within those limits?
4. What is your best estimate of the number of beans?

This four-step process has been found to elicit the most accurate estimates from individuals; Mark welcomes ideas on how to do even better.
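
To make the bookkeeping concrete, here is a small sketch of my own showing how the four answers might be recorded and then rescaled to a common confidence level. The linear rescaling is an assumption borrowed from the structured-elicitation literature, not necessarily Mark’s exact procedure:

```python
from dataclasses import dataclass

@dataclass
class Elicitation:
    """One judge's answers to the four subsidiary questions."""
    highest: float     # Q1: highest plausible estimate
    lowest: float      # Q2: lowest plausible estimate
    confidence: float  # Q3: stated chance (0-1) truth lies within the limits
    best: float        # Q4: best single estimate

def standardize(e: Elicitation, target: float = 0.9):
    """Rescale a judge's interval to a common confidence level.

    The linear stretch about the best estimate is an assumption made
    for illustration; it lets differently confident judges' bounds be
    compared at the same (here 90%) level.
    """
    factor = target / e.confidence
    lower = e.best - (e.best - e.lowest) * factor
    upper = e.best + (e.highest - e.best) * factor
    return lower, upper

# A judge who is only 60% sure the true count lies between 400 and 700:
judge = Elicitation(highest=700, lowest=400, confidence=0.6, best=520)
print(standardize(judge))  # (340.0, 790.0): a wider, 90%-comparable interval
```

Some such standardization matters because one judge’s ‘60% sure’ bounds cannot be compared directly with another’s ‘90% sure’ bounds.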

Next, one can improve the estimates by obtaining a consensus from multiple individuals. This is done by incorporating the four-step process above into what is known as the Delphi Method. After each member of the group has independently answered the four questions, the answers are collated and shown to the whole group. Each member is then asked to revise his or her personal answers in light of the collated figures, and the whole sequence may be repeated. This has been empirically shown to yield even more accurate estimates. Furthermore, the variation in inputs from multiple individuals with different cultural and educational backgrounds gives an idea of how robust and reliable the estimate might be. This is reminiscent, albeit tenuously, of a ‘sensitivity analysis’ of the robustness of mathematical model predictions in the face of parameter uncertainty. Despite the real-world importance of determining how sensitive predictions are to parameter uncertainty, sensitivity analysis is rarely performed, and rarely required even by peer-reviewed ecology journals.
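
As a hypothetical illustration of the mechanics, the sketch below simulates Delphi rounds with the revision step modelled crudely as each estimate moving partway toward the group median; in a real elicitation the participants of course revise their own answers freely after seeing the collated figures:

```python
import statistics

def delphi_round(estimates, pull=0.5):
    """One simulated Delphi iteration (hypothetical illustration).

    Real Delphi feedback shows every participant the collated answers
    and lets them revise freely; here revision is modelled crudely as
    each estimate moving partway toward the group median.
    """
    median = statistics.median(estimates)
    return [e + pull * (median - e) for e in estimates]

estimates = [350, 480, 510, 620, 900]  # first-round best estimates
for round_no in range(1, 4):
    estimates = delphi_round(estimates)
    spread = max(estimates) - min(estimates)
    print(f"Round {round_no}: median {statistics.median(estimates):.0f}, "
          f"spread {spread:.0f}")
```

The shrinking spread from round to round is one simple signal that the group is converging on a consensus.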

Conservation could benefit from group-based expert judgement, since ecological data are often scarce and incomplete and the nonlinear behaviour of complex ecosystems makes low-probability high-risk events inevitable. Mark’s take-home message is that expert judgement should be accorded the same reverence as scientific data, but it should be harnessed in a systematic way that minimizes bias and maximizes precision rather than being a haphazard question-and-answer session. The complementarity of quantitative data and expert judgement in turn feeds nicely, I think, into the system dynamics framework for simulating and solving complex nonlinear problems in many disciplines.

I am grateful to Mark Burgman for permission to communicate his research and hope it will be of value to my readers.

Related topics
Info-gap decision theory
How banking benefits invasion ecology
Horizon scanning in nature conservation
