
What Makes Good Forecasters

The Early Warning Project strives to use the best methods to assess risks of mass atrocities around the world, so we are happy to see a new paper (PDF) from the Good Judgment Project (GJP)—a multi-year, U.S. government-funded forecasting experiment that's set to conclude later this year—getting lots of press coverage this week. As Kelsey Atherton describes for Popular Science (here),

“For the experiment, participants competed in two nine-month-long forecasting tournaments. The questions for the tournament were selected by the Intelligence Advanced Research Projects Activity. Over the two years of the tournament, the forecasters were each asked a total of 199 questions, which “covered topics ranging from whether North Korea would test a nuclear device between January 9, 2012, and April 1, 2012, to whether Moody’s would downgrade the sovereign debt rating of Greece between October 3, 2011, and November 30, 2011.” Forecasters had to answer at least 25 of the questions. The vast majority of the questions had just two possible outcomes, like if a certain embattled world leader would remain in power after a given date. Other questions asked forecasters to choose one time-frame among multiple choices for a possible future event. Participants answered the questions online.”

The results from this tournament have surprised many who are accustomed to hearing that people, even experts, are generally poor forecasters. As the authors of the paper on GJP's findings report, "Overall, forecasters were significantly better than chance."
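To make "better than chance" concrete: accuracy in forecasting tournaments like this one is typically scored with the Brier score, the squared error between a stated probability and the eventual outcome (lower is better). The short Python sketch below is purely illustrative; the probabilities and outcomes are invented rather than taken from the study, but it shows how an informed forecaster outscores an uninformative 50/50 baseline.

```python
# Illustrative only: Brier scores for a hypothetical forecaster versus a 50/50 "chance" baseline.
# The probabilities and outcomes below are invented, not data from the GJP study.

def brier_score(forecasts, outcomes):
    """Mean squared error between probability forecasts and binary outcomes (lower is better)."""
    return sum((p - o) ** 2 for p, o in zip(forecasts, outcomes)) / len(forecasts)

# Hypothetical probabilities a forecaster assigned to five yes/no questions,
# and what actually happened (1 = event occurred, 0 = it did not).
forecaster_probs = [0.85, 0.10, 0.70, 0.20, 0.95]
outcomes = [1, 0, 1, 0, 1]

chance_probs = [0.5] * len(outcomes)  # always guessing 50/50

print("Forecaster Brier score:", round(brier_score(forecaster_probs, outcomes), 3))       # 0.033
print("Chance baseline Brier score:", round(brier_score(chance_probs, outcomes), 3))      # 0.25
```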

By design, the Early Warning Project’s Expert Opinion Pool uses many of the methods and strategies the paper discusses. For starters, the opinion pool approach—in which individuals are asked to estimate the probability of various events' occurrence and can update those forecasts as often as they like—is one of the methods GJP uses to elicit and combine forecasts from its participants.
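To illustrate the basic mechanics, here is a minimal Python sketch that assumes the simplest possible aggregation, an unweighted linear opinion pool that averages each participant's most recent estimate. The `OpinionPool` class, its methods, and the example question are hypothetical stand-ins, not the Early Warning Project's actual system or aggregation rule.

```python
from collections import defaultdict

class OpinionPool:
    """A toy opinion pool: participants submit and update probability estimates,
    and the pool's forecast is the unweighted mean of each participant's latest estimate.
    (An illustrative assumption, not the Early Warning Project's actual algorithm.)"""

    def __init__(self):
        # question -> {participant: latest probability estimate}
        self.latest = defaultdict(dict)

    def submit(self, question, participant, probability):
        """Record, or overwrite, a participant's probability estimate for a question."""
        if not 0.0 <= probability <= 1.0:
            raise ValueError("probability must be between 0 and 1")
        self.latest[question][participant] = probability

    def aggregate(self, question):
        """Return the pool's combined forecast: the mean of the latest estimates, or None if empty."""
        estimates = self.latest[question].values()
        if not estimates:
            return None
        return sum(estimates) / len(estimates)

# Hypothetical usage with a made-up question and participants.
pool = OpinionPool()
question = "Onset of state-led mass killing in Country X before 1 January 2015?"
pool.submit(question, "analyst_a", 0.10)
pool.submit(question, "analyst_b", 0.25)
pool.submit(question, "analyst_a", 0.15)  # analyst_a revises the earlier estimate
print(pool.aggregate(question))  # 0.2
```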

GJP also found that forecasters with “political knowledge” tend to be more accurate than those who don't closely follow the news. For us, this means recruiting participants with subject-matter expertise and regional knowledge to answer questions about things like which countries are most likely to see an onset of mass killing in the coming year and whether the government of Colombia and the FARC will strike a peace deal.

GJP also reports that forecasters working in teams were about 10 percent more accurate than forecasters working alone. Our opinion pool doesn't implement this idea exactly, but we do mimic some important features of it. In the study, team members were encouraged to work together, debating and sharing their forecasts with one another. In our opinion pool, participants can see and track the aggregate forecast and can discuss their forecasts and relevant news with other members of the pool in comment threads attached to each question.

The article points to further findings that are also relevant to our work. As Max Nisen summarized for The Atlantic (here), it turns out that

“Not all of the skills of successful forecasters are innate; many can be learned. The researchers found that being instructed to recognize and avoid bias and to use outside views had a huge impact.”

Based on these findings, we plan to offer training in probabilistic reasoning and avoiding judgmental errors later this year. As our opinion pool grows, we also hope to experiment with explicit teaming to see whether it boosts the pool's accuracy.

If you are interested in participating in our Expert Opinion Pool, you can reach us at ewp@ushmm.org.