…but I’m never wrong! Or at least that is how the old adage goes. This may not be the exact mantra of modern-day economic “prophets”, but it isn’t too far off. It seems (and the study we share suggests) that a forecaster only has to get one big prediction right, and from then on their word is treated as good as gold.
Today’s show is based on an article in the Boston Globe titled “That Guy Who Called the Big One? Don’t Listen to Him”. The article highlights how “Dr. Doom” (Nouriel Roubini) correctly predicted the 2007-2009 economic downturn. As you can imagine, predicting the largest economic downturn since the Great Depression is a pretty noteworthy accomplishment, so you would assume someone with that much economic knowledge and foresight would have a pretty impressive track record. Well, according to Joe Keohane, that isn’t always the case.
Mr. Keohane went back and compiled some of Dr. Roubini’s past predictions and found that he was quite often wrong. The article notes that in 2008 he predicted ‘hundreds of hedge funds were on the verge of failure and that the government would have to close the markets for a week or two in the coming days to cope with the shock’. Obviously he missed the mark on that one. In 2009 he was convinced that oil prices would stay below $40 and that car companies should begin increasing production of SUVs. As time has shown, quite the opposite happened. The final prediction cited: he said the S&P 500 would fall below 600 in 2009, a year when that index actually returned 23.5%.
What does Mr. Keohane attribute this lack of consistency to? Quite simply put:
“The people who successfully predict extreme events, and are duly garlanded with accolades, big book sales, and lucrative speaking engagements, don’t do so because their judgment is so sharp. They do it because it’s so bad.”
Two researchers, Jerker Denrell and Christina Fang, decided to do a study to determine whether there really are people who are better at predicting the future than others. As you listen to the show, we share how they actually conducted their research. What they found was that economists with a better record at calling extreme events actually had a worse record overall. We go on in the show to explain how these results appear not only in economic forecasting but also in weather and political forecasting.
As you listen, we explain other findings by Denrell and Fang and touch on why and how our judgment can be warped by our bias toward success. We go on to share a listener email and provide what we feel is a unique insight into how your current housing situation may not truly be as bad as it seems and how, even though it may not be obvious, there are ways to potentially “turn your lemon into lemonade”.
To close out the show, we share a fun article in Consumer Reports titled “Save by Cutting Waste”. Well, tightwad Tod, you have even outdone the Money-Guys. Listen out for ways to save on: bananas, bar soap, condiments, cookies, cornflakes, hair gel, honey, laundry detergent, pump-top hand lotion, shampoo, sugar, and toothpaste.
If you make enough predictions, some of them are bound to be correct just by sheer chance. That doesn’t make you a good predictor of events. As you say, everyone forgets the other 99% of the predictions you made that were wrong. Show me someone who can repeatedly make predictions significantly better than chance and then I’ll be impressed.
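The “sheer chance” point is easy to demonstrate with a quick back-of-the-envelope simulation. This is just a sketch of the commenter’s argument, not the Denrell-Fang methodology; the number of forecasters, the number of calls, the 5% chance of an extreme event, and the “boldness” range are all made-up assumptions:

```python
import random

random.seed(1)  # illustrative only; any seed works

N_FORECASTERS = 500   # hypothetical number of pundits
N_CALLS = 50          # hypothetical yes/no calls each one makes
P_EXTREME = 0.05      # assumed chance that any given event is "extreme"

forecasters = []
for _ in range(N_FORECASTERS):
    # Each forecaster guesses at random; "bold" ones cry "extreme" more often.
    boldness = random.uniform(0.05, 0.50)
    extreme_hits = 0   # extreme events they correctly called
    correct = 0        # calls of any kind they got right
    for _ in range(N_CALLS):
        event_is_extreme = random.random() < P_EXTREME
        called_extreme = random.random() < boldness
        if called_extreme == event_is_extreme:
            correct += 1
        if called_extreme and event_is_extreme:
            extreme_hits += 1
    forecasters.append((extreme_hits, correct / N_CALLS))

# Rank everyone by how many "big ones" they called correctly.
forecasters.sort(reverse=True)
top = forecasters[:10]

print("Overall accuracy of the 10 best extreme-event callers: "
      f"{sum(acc for _, acc in top) / len(top):.2f}")
print("Overall accuracy of the whole group: "
      f"{sum(acc for _, acc in forecasters) / len(forecasters):.2f}")
```

Run it a few times and the same pattern holds: the forecasters who “nail” the most extreme events are the ones who predict extremes constantly, which drags their overall accuracy below the group average, the same pattern Denrell and Fang report.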
The other statistical phenomenon to be aware of (in addition to the base rate fallacy already mentioned) is “regression toward the mean”. This says that once you have a single event well outside of the normal range, the next event is far more likely to fall closer to the average. Over subsequent observations, outcomes tend to drift back into the normal range; they are “regressing toward the mean” (the average value). It doesn’t imply that unusual events can’t happen repeatedly; it just says that a series of events outside the normal range is even more unlikely than a single unusual event. For example, in 2008 we saw huge drops in the market. Then in 2009 it bounced way back, bringing the 2-year return closer to the mean. To have a huge drop in 2008 and then again in 2009 would have been extremely unusual. Then in 2010 we saw more normal returns, bringing the 3-year return even closer to the mean.
Regression to the mean shows up in all sorts of complex systems: financial markets, weather, traffic patterns, etc. Their outcomes tend to follow a roughly bell-shaped distribution, which means most events will be “average.” Outlying events do occur, but they are inherently rare, less likely than average events, and even less likely to occur back to back.
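To put rough numbers on that “rare, and rarer still in a row” idea, here is a small simulation. The return figures are purely illustrative assumptions (an 8% average annual return with 18% volatility and a -30% cutoff for an “extreme” year), not actual market statistics:

```python
import random

random.seed(2)  # illustrative only

MEAN_RETURN = 0.08   # assumed long-run average annual return
STDEV = 0.18         # assumed year-to-year volatility
EXTREME = -0.30      # treat anything worse than -30% as an "extreme" year
N_YEARS = 1_000_000  # simulate a very long run of independent years

extreme_years = 0
back_to_back = 0
prev_extreme = False

for _ in range(N_YEARS):
    # Draw each year's return from a bell curve, independent of last year.
    r = random.gauss(MEAN_RETURN, STDEV)
    is_extreme = r < EXTREME
    if is_extreme:
        extreme_years += 1
        if prev_extreme:
            back_to_back += 1
    prev_extreme = is_extreme

print(f"Share of years that are extreme:           {extreme_years / N_YEARS:.3%}")
print(f"Share of years ending an extreme repeat:   {back_to_back / N_YEARS:.4%}")
# With these assumptions, roughly 2% of years are extreme, but an extreme
# year followed immediately by another one shows up only a few times in
# ten thousand years.
```

Note that each year in this sketch is drawn independently; the year after a crash tends to look “normal” simply because normal years vastly outnumber extreme ones, not because the market somehow remembers the crash and corrects for it. That is all regression toward the mean says.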
Back to the point of the podcast: if someone correctly predicts an unusual event, it would be foolish for them to turn around and predict a second unusual event right after it. It is far more likely that the next event will be closer to average, and no one is impressed when someone predicts something typical (even if the prediction turns out to be correct).