What could be more important than accurately predicting the future? It is naive of some people to turn their noses up at forecasters just because they get things wrong now and again. After all, a life without forecasts is completely unthinkable. When we get married, we are making a forecast – and we do the same when we decide what to study, buy a house or invest in an equity fund. We may not be aware of it, but each of these decisions involves an assumption about future developments. That’s why I was so interested in this book, which is about how we can improve our ability to make precise forecasts of the future.
The author, a renowned scholar in his field, does not catalog subjective experiences with predictions, nor does he simply offer opinions on the topic. Instead, he describes a research project run on behalf of the Intelligence Advanced Research Projects Agency (IARPA, established in 2006) in an effort to improve the reliability of the US intelligence services’ forecasting techniques.
In 2010, some 2,800 volunteers were recruited for the project – ordinary people rather than expert forecasters. These volunteers, all with very different professional backgrounds, were regularly asked to analyze and forecast the kinds of questions that intelligence services commonly face: Will the president of Tunisia flee into exile in the next month? Will an outbreak of bird flu in China kill more than ten people in the next six months? Will the doctors performing the autopsy on the Palestinian leader Arafat find traces of a poisonous substance? Will Serbia become an official candidate for accession to the EU by December 31, 2011?
Most of the questions involved short- to medium-term forecasts, reflecting the author’s opinion that only comparatively short-term predictions can be reasonably accurate. His research has shown that the human brain will never be able to look years ahead to successfully predict decisive personal or national events – no matter how much we work on our forecasting techniques. A more condensed timeframe also allowed the project to determine whether the forecasters’ predictions were correct or not. With long-term forecasts that extend over many years, such score-keeping would have been impossible.
The project’s aim was to identify the “superforecasters” among these volunteers, i.e. those whose forecasts proved far more accurate over a number of years than chance would allow. In the first year of the project, there were 59 such top performers. Roughly 30 percent of them had dropped out of the top two percent a year later – but that also means that 70 percent of these superforecasters still ranked among the best of the best. If chance alone were at work, the probability of this level of consistency would be just 1 in 100 million. In fact, the year-to-year correlation of the forecasters’ performance was 0.65, which makes a drop-out rate of roughly one in three about what regression to the mean would predict. Sure, luck plays a role, but not a decisive one. It is only natural that even the best performers go through a bad patch of ordinary results at some point – just like athletes, whose form can dip from time to time.
The author criticizes the forecasts made by supposed economic and political experts that so frequently appear in the media. In his view, these predictions are often hopelessly vague, come without a timeline, and are never subsequently reviewed for accuracy. Most of the forecasts that appear in the media are made by pundits whose confidence in their own predictions is inversely correlated with their accuracy. In fact, the research data revealed that the more famous an expert was, the less accurate his forecasts were. This is largely because less confident forecasters, and those more sparing in their use of words like “certain” and “impossible”, are less likely to be contacted by the media than those who make apodictic statements.
For a forecast to be judged at all, it must first be formulated clearly and unambiguously, and it is precisely this clarity that is so often lacking. After all, the vaguer the prediction, the easier it is for a forecaster to claim afterwards that it was a good one (much like a horoscope, whose generalized formulations are easy to reinterpret in light of a wide variety of actual events).
A “forecast” that says that something or other “could” happen is worthless – in principle, absolutely anything “could” happen. It all boils down to probability, and it is very rare that the probability of something happening is either 0 or 100 percent. The author therefore calls on all forecasters to put numbers on the probability of their predictions. The reason is simple: the three categories “certain”, “impossible” and “maybe” do not allow for sufficient differentiation.
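The review doesn’t name a scoring rule, but one widely used way to grade numeric forecasts is the Brier score: the squared gap between the stated probability and the actual outcome, averaged over all of a forecaster’s predictions. Here is a minimal sketch in Python; the sample forecasts are invented for illustration:

```python
def brier_score(forecasts):
    """Mean squared error between stated probabilities and outcomes.

    forecasts: (probability, outcome) pairs, where probability is the
    stated chance of the event (0.0-1.0) and outcome is 1 if the event
    happened, 0 if it did not. Lower is better: 0.0 is perfect,
    0.25 is what always answering "50 percent" earns, and 1.0 is
    perfect wrongness.
    """
    return sum((p - o) ** 2 for p, o in forecasts) / len(forecasts)

# A forecaster who commits to numbers can be scored; "it could happen" cannot.
print(brier_score([(0.7, 1), (0.2, 0), (0.9, 1)]))  # ~0.047: well calibrated
print(brier_score([(0.5, 1), (0.5, 0), (0.5, 1)]))  # 0.250: pure hedging
```

A vague verbal forecast offers nothing to plug into such a formula, which is exactly why the call for numbers matters.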
So what makes a good forecaster?
- A good forecaster is pragmatic and not governed by a specific ideology. The experiments presented in the book showed that a group of experts who organized their thinking around a big idea didn’t perform anywhere near as well as a group of pragmatists. Good forecasters are not wedded to a single idea or agenda.
- The results of the research project described above show that the best forecasters scored in the top 20 percent of the population in intelligence tests. Intelligence and knowledge clearly play a role, but only up to a point: above a certain level, the reliability of forecasts does not simply keep rising with the forecaster’s intelligence and education.
- As far as personality is concerned, the best forecasters score highly for the Big Five personality test’s trait of “Openness to Experience.” For good forecasters, beliefs are hypotheses to be tested, not treasures to be protected.
- Good forecasters start by establishing a base rate to estimate the probability of something (a minimal sketch of this reasoning follows the list below). What does that mean? Here’s an example (not from the author, but one of my own): If, before getting married, I want to assess the probability of my marriage ending in divorce, I don’t start by considering the personalities and life experiences of myself and my future wife (that would be to adopt an “interior perspective”). Instead, I start with the “exterior perspective” to arrive at the base rate for my probability estimate. The base rate stands at 39.3 percent, i.e. the percentage of German marriages over the last 25 years that ended in divorce. I then fine-tune my forecast by taking into account the divorce rate for people living in a major metropolis (which is likely to be higher than the national average). If marriages between Protestants or non-affiliated partners are statistically more likely to end in divorce than marriages between Catholics, and we are both Catholic, I revise my base rate downwards. Only at this point do I begin to take account of specific characteristics and correct the prior probability upwards or downwards as necessary. If, for example, the probability of a marriage ending in divorce is higher for someone who has already been married and divorced on numerous occasions than for someone who has never been married before, I adjust the probability upwards if either I or my wife have been through a number of divorces, and so on.
- Good forecasters aren’t primarily interested in looking for confirmation of their predictions; they are far more interested in the arguments that can be raised against them. It is important to consider any situation from a different perspective.
- Good forecasters have an affinity for numbers and think in them readily, although this doesn’t mean that they use complex mathematical formulas. Most good forecasts are the result of profound reflection and balanced judgment. But numbers are important because they force forecasters to think more carefully about their predictions – and because it is numbers alone that make it possible to check the accuracy of any forecast.
- Good forecasters revise their forecasts more frequently (in response to new evidence and arguments), but in doing so they avoid the twin pitfalls of overreacting and underreacting to new information (the second sketch after this list shows one formal way to strike this balance). Because they don’t ignore new evidence, even when it undercuts their previously held opinions, they avoid what psychologists call “confirmation bias.” Conversely, they don’t let new information carry excessive weight either – that would discard the value of the original information on which the initial forecast was based. When old and new information are carefully weighed against one another, both retain their value and can be combined into a new forecast.
- Above all, becoming a good forecaster requires systematic practice. And “practice” doesn’t just mean churning out a large number of forecasts. Rather, effective learning requires clear feedback. A forecaster who always makes vague predictions will find it difficult to learn from his mistakes, because he doesn’t get the clear and prompt feedback he needs to know if he was right or wrong. Forecasters who defend their mistaken predictions (which is exactly what most bad forecasters do), rather than admitting their failings, deprive themselves of the opportunity to learn from their mistakes.
- Good forecasters tend to be introspective and self-critical, and know the value of checking their thinking for the numerous cognitive and emotional biases familiar from, for example, the theories and experiments of behavioral economics.
- Good forecasters break down what appear to be intractable “big” problems and questions into a number of smaller, tractable subproblems. It is usually difficult to make an accurate forecast on a complex, general, global question, and much easier to deal with a smaller, more specific one. Thankfully, global questions can normally be broken down into a series of easier subquestions (the last sketch below walks through the classic example).
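To make the base-rate logic from the divorce example above concrete, here is a minimal sketch. The 39.3 percent figure comes from the example; every other number is an invented placeholder, not a real statistic:

```python
# Hypothetical reference-class rates. Only the 39.3% national figure
# appears in the example above; the rest are invented for illustration.
base_rates = {
    "all marriages (Germany, last 25 years)": 0.393,
    "big-city residents": 0.45,
    "big-city residents, both partners Catholic": 0.36,
}

# Exterior view first: start from the most specific reference class
# for which a rate is available.
estimate = base_rates["big-city residents, both partners Catholic"]

# Only then the interior view: nudge the prior for case-specific facts,
# e.g. neither partner has been divorced before (made-up adjustment).
estimate -= 0.03

print(f"Estimated divorce probability: {estimate:.0%}")  # 33%
```

The order is the point: the reference class anchors the forecast, and the case-specific details merely adjust it.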
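The balanced updating described in the point about revising forecasts has a natural formal analogue in Bayes’ rule, although the review doesn’t mention it by name. A sketch with invented numbers:

```python
def bayes_update(prior, likelihood_ratio):
    """Combine a prior forecast with new evidence via Bayes' rule.

    likelihood_ratio: how much more likely the new evidence would be
    if the event were going to happen than if it were not. The prior
    keeps its weight, so a single piece of news is neither ignored
    nor allowed to overwhelm everything that came before.
    """
    prior_odds = prior / (1 - prior)
    posterior_odds = prior_odds * likelihood_ratio
    return posterior_odds / (1 + posterior_odds)

p = 0.30                  # initial forecast
p = bayes_update(p, 2.0)  # moderately supportive news -> ~0.46
p = bayes_update(p, 0.5)  # equally strong contrary news -> back to 0.30
print(f"{p:.2f}")
```

Overreacting would mean treating each likelihood ratio as far larger than the evidence warrants; underreacting would mean leaving the prior untouched no matter what arrives.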
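Finally, the decomposition strategy from the last point can be shown with the classic Fermi question, “How many piano tuners are there in Chicago?” This toy illustration is my own, not the book’s; every input below is a rough guess, but each subquestion is far easier to estimate than the whole:

```python
# Fermi-style decomposition of "How many piano tuners are there in Chicago?"
population = 2_700_000            # people in Chicago (rough guess)
people_per_household = 2.5
households = population / people_per_household

share_with_piano = 1 / 20         # households owning a piano
tunings_per_piano_per_year = 1
tunings_needed = households * share_with_piano * tunings_per_piano_per_year

tunings_per_tuner_per_day = 4     # one tuner's plausible daily workload
working_days_per_year = 250
tuner_capacity = tunings_per_tuner_per_day * working_days_per_year

print(round(tunings_needed / tuner_capacity))  # ~54 tuners
```

None of the inputs is reliable on its own, but the errors tend to cancel, and the final answer lands in the right order of magnitude – which is all a first forecast needs.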
This book is highly recommended – it encourages us to reflect on our own behavior and forecasts. The style is accessible and straightforward, which makes reading it – despite the complex subject matter – an absolute delight.