Calgary Herald

YOU, TOO, CAN BE A SUPERFORECASTER

New research shows accurate forecasting is not the preserve of experts

- CLAIRE BROWNELL

Congratulations! You just won the lottery, but it comes with a condition: You have to invest the money, and you have only two choices of portfolio manager.

Your first option is Thomas Friedman. He's a New York Times columnist and bestselling author whose forecasts and analyses of international politics and economics have earned him audiences at Davos and the White House.

Your second option is Bill Flack. He's retired from a job with the U.S. Department of Agriculture, has a Bachelor of Science from the University of Nebraska and likes bird watching. No one has ever asked him to deliver his forecasts on television or in front of world leaders, but he's happy to take a stab at stock picking for you anyway, just to keep himself busy.

Hint: Go with the retired bird watcher.

Sure, Friedman gets a lot more exposure and accolades than Flack.

But Flack has something Friedman doesn't. Flack's forecasting track record has been analyzed and quantified by impartial observers in a scientific research project. He volunteered to take part in a forecasting tournament, along with thousands of others, and turned out to be astonishingly good at it, with his predictions landing him in the top two per cent.

In his forthcoming book Superforecasting: The Art and Science of Prediction, University of Pennsylvania psychology researcher Philip Tetlock writes about what makes people like Flack so good.

The findings of his latest research are particularly surprising given the work he's best known for: a 20-year study of expert forecasts that concluded the average expert is no more accurate than a dart-throwing chimpanzee.

Tetlock's more recent research project, funded by a U.S. government agency that supports research with the potential to improve American security and intelligence, assembled volunteers for the forecasting tournament in which Flack participated. The volunteers represented a diverse mix of genders, ages and occupations, with little in common besides a natural intellectual curiosity that inspired them to puzzle out forecasts on nearly 500 questions about world affairs (questions like "Will Italy restructure or default on its debt by 31 December 2011?") for no compensation other than a $250 US Amazon gift card.

Those who stood out as superforecasters were anything but experts: one was a retired pipe installer, another a former ballroom dancer. It's possible that Friedman and other famous pundits and high-paid analysts are super at making forecasts, too, or at least more accurate than a dart-throwing chimp, but no one's ever tallied up their records to check.

Canadian journalist Dan Gardner, who co-authored Superforecasting with Tetlock after learning about his work through his own research on assessing risks and making predictions, said he would put his money on Flack over Friedman any day.

"I would invest on the basis of Bill Flack's say-so," Gardner said. "If Bill Flack is very confident about some geopolitical event, he's got the track record that would allow me to confidently put money on it."

Flack and his fellow superforecasters represented the top two per cent of all volunteers in the tournament. They outperformed an official control group by 60 per cent in the project's first year and also performed about 30 per cent better than intelligence community analysts with access to classified information.

The very best of the superforecasters were even able to beat prediction markets, where traders bet real money on futures contracts based on the same questions, by as much as 40 per cent.
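Comparisons like these come from scoring every forecast against what actually happened. One standard accuracy measure for probability forecasts is the Brier score, essentially the mean squared error between stated probabilities and outcomes. The sketch below uses the simple binary form, and the example forecasts are made up purely for illustration:

```python
def brier_score(forecasts):
    """Mean squared error between predicted probabilities and outcomes.

    forecasts: list of (probability, outcome) pairs, where probability
    is the forecaster's stated chance the event happens and outcome is
    1 if it happened, 0 if it didn't. 0.0 is perfect; always hedging at
    50 per cent earns 0.25 on this binary form.
    """
    return sum((p - o) ** 2 for p, o in forecasts) / len(forecasts)

# A confident, well-calibrated forecaster (hypothetical numbers)
sharp = [(0.9, 1), (0.8, 1), (0.1, 0)]
# A forecaster who hedges everything at 50 per cent
hedger = [(0.5, 1), (0.5, 1), (0.5, 0)]

print(round(brier_score(sharp), 3))   # 0.02
print(round(brier_score(hedger), 3))  # 0.25
```

A lower score is better, which is why confident forecasters who are also right, like Tetlock's top volunteers, pull so far ahead of cautious hedgers over hundreds of questions.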

But superforecasters don't have special genetics or uncommon luck. Tetlock and Gardner believe anyone can improve their forecasting ability by learning from the way they work. If that's true, people in business and finance who make an effort to do so have a lot to gain, and those who don't, much to lose.

Consider American hedge fund tycoon Bill Ackman's famous short against the bond insurer MBIA Inc., a bet premised on the company's risky exposure to the credit instruments largely responsible for the 2008 financial crisis. While American and European banks lost $1 trillion US on assets and loans that turned out to be toxic from 2007 to 2009, Ackman and his investors pocketed more than $1 billion US when MBIA crashed as predicted.

But it's hard to know if Ackman is a superforecaster or just someone who has made some good calls. The workplace culture cultivated by Ray Dalio of Bridgewater Associates LP, the world's largest hedge fund manager, may offer a better example of what superforecasting principles would look like applied in practice. Just as Tetlock's superforecasters consistently beat benchmarks with their predictions, Bridgewater's annual returns have thumped the S&P 500 on a regular basis since inception, even during the 2008 financial crisis.

Dalio's Principles, a 100-page text that new hires are required to read, has a lot of similarities with the best practices of superforecasting identified by Tetlock's research. Dalio declined to be interviewed, but in an online copy of Principles, he summarizes his advice this way: "I want you to work for yourself, to come up with independent opinions, to stress-test them, to be wary about being overconfident, and to reflect on the consequences of your decisions and constantly improve."

Bridgewater makes bets on economic trends like exchange rates, inflation and GDP growth, which Tetlock said are easier targets for crowd-beating predictions than the prices of securities with deep, liquid markets.

Even if you're not a superforecaster, it's possible to profit from the knowledge that most expert forecasts are useless. Tetlock and Gardner use the example of Silicon Valley venture capitalist Vinod Khosla, who ignores most expert forecasts about the next big tech trends and diversifies his investments. Khosla is comfortable with the knowledge that most of his investments will fail spectacularly, but bets on a few paying off handsomely.

"It's a bit of a cognitive arms race, isn't it? Who can more efficiently de-bias their judgment and thereby gain a competitive edge?" Tetlock said. "If a particular company did it, my bet is that company would do better."

Banks employ an army of financial analysts, generally making a base salary of $60,000 to $100,000 a year or more, to scrutinize public companies and make calls on their upcoming quarterly results. Every day, the financial press is filled with expert opinions predicting everything from the Bank of Canada’s next move on interest rates to where the price of oil is headed to what’s going to happen to the Chinese stock market.

According to the results of Tetlock's earlier research project (the one that concluded the average expert's predictions are no better than a dart-throwing chimpanzee's), businesses and investors would do best to ignore them all if they can't produce an accuracy track record. That's because professional forecasters have priorities other than accuracy, including being entertaining on television, advancing an agenda or helping their employers attract clients. Those priorities might soon include hanging onto their jobs once their employers learn there's compelling evidence that a group of volunteers that included a retired pipe installer and a ballroom dancer can do better.

Forecasts are often weakest when they are most assertive, as with the intelligence over weapons of mass destruction in Saddam Hussein's Iraq.

"Baghdad has chemical and biological weapons as well as missiles with ranges in excess of UN restrictions," read the 2002 American intelligence report.

The big problem here was the language of dead certainty: "Baghdad has chemical and biological weapons," not, "Based on an objective analysis of all available information, we're 60 per cent confident that Baghdad has chemical and biological weapons."

"On Iraqi WMDs, the (intelligence community) fell prey to hubris," Tetlock and Gardner write. "As a result, it wasn't merely wrong. It was wrong when it said it couldn't be wrong."

BMO Financial Group chief economist Doug Porter is one of those real-world experts paid to make predictions about where the economy is headed. He said his organization regularly assesses how accurate those forecasts turned out to be.

“We’ve had some very successful years, but sometimes it does tend to be a very humbling experience.”

The good news is that Tetlock believes anyone can improve their predictive abilities by learning from how superforecasters work. The book concludes with a section titled Ten Commandments for aspiring forecasters. The advice includes breaking complicated problems into simpler sub-problems (for example, starting with the population of your city when estimating the number of potential romantic partners); weighing the evidence you've collected by assigning it explicit percentage probabilities; and quantifying what you don't know and how much doubt that introduces.
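The first of those habits is the classic Fermi-estimate move: multiply out a chain of sub-estimates instead of guessing the final answer directly. Every number below is a made-up assumption, included only to show the mechanics of the decomposition:

```python
# Fermi-style decomposition of "how many potential romantic partners
# are in my city?" -- all factors are illustrative guesses, not data.
city_population = 1_200_000      # something you can actually look up
right_age_range = 0.20           # guess: share in a compatible age band
single = 0.50                    # guess: share of those who are unattached
mutually_compatible = 0.05       # guess: shared interests, mutual attraction

estimate = city_population * right_age_range * single * mutually_compatible
print(round(estimate))  # 6000
```

The point isn't the final number; it's that every guess is now explicit, so you can see which sub-estimate the answer is most sensitive to and go improve that one.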

Given stakes that run well into the many billions of dollars for everyone from business to technology to intelligence, it's not surprising that a private-sector spinoff from Tetlock's project is recruiting volunteers for a new forecasting tournament through the website www.goodjudgment.com. Tetlock said firms participating in the tournament include a large Wall Street investment bank, which he declined to name because it hasn't made its participation public yet.

Tetlock said he's cautiously optimistic that we'll see incremental improvements in forecasting, with more experts adopting practices like attaching confidence intervals to their predictions. But even though the potential upside is huge, he said it's likely to be a gradual process.

"I'm cautiously optimistic, but I wouldn't be following the basic tenets of superforecasting if I said I think it's going to transform the world very rapidly," Tetlock said. "That sort of thing happens extremely rarely. And predicting extremely rare events is hazardous to your superforecasting reputation."


Photo: MARK MAKELA / FOR NATIONAL POST. University of Pennsylvania psychology researcher Philip Tetlock says he's cautiously optimistic that we're likely to see incremental improvements in forecasting, with more experts adopting practices like attaching confidence intervals to their predictions.
