Why Do Economic Forecasters Still Have Jobs?

Once again, the Wall Street Journal has published its annual ranking of economic forecasters. Using methods developed with the Federal Reserve Bank of Atlanta, the newspaper calculated which forecasters made the smallest errors in their prognostications for 2012. There’s just one problem: the results are meaningless.

The Journal’s ranking is based on four numbers: the unemployment rate in the last quarter of the year, annual growth in gross domestic product, and two measures of inflation. Almost every year, a different forecaster comes out on top.

This year, in a ridiculous twist, the Journal wrote about the winners with no reference to their prior records in the competition. Some forecasters may be better than others, but the only way to find out is to see which of them are consistently in the top flight.

To wit, the winner for 2012 was Arun Raha of Eaton, beating out 47 other contenders. But in last year’s rankings, he was 23rd out of 52. So, did he have a special insight in 2012, or did he just get lucky? There’s no way to know.

By the same token, Mark Nielson of MacroEcon Global Advisers finished dead last in 2011 but climbed to 17th in 2012. Did he have an unusually bad year in 2011, or an unusually good one in 2012? Again, it’s a mystery.

If you average the Journal’s scores for 2011 and 2012, naturally some forecasters do better than others. In fact, the results make a pretty nice bell curve. The only ones who finish two standard deviations below the mean – the technical term for this is “in the toilet” – are William B. Hummer and Tracy Herrick.
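For readers who want to see the arithmetic, here is a minimal sketch of that kind of calculation. The forecaster names and scores below are invented for illustration – the Journal’s actual figures are not reproduced here – but the steps are the same: average each forecaster’s score across the two years, then flag anyone whose average lands more than two standard deviations below the mean.

```python
# Illustrative only: these names and scores are made up; the Journal's
# actual accuracy scores are not reproduced here. Higher score = better.
import statistics

scores_2011_2012 = {
    "Forecaster A": (81, 79),
    "Forecaster B": (76, 80),
    "Forecaster C": (84, 80),
    "Forecaster D": (77, 81),
    "Forecaster E": (83, 79),
    "Forecaster F": (75, 79),
    "Forecaster G": (78, 82),
    "Forecaster H": (45, 35),
}

# Average each forecaster's score across the two years.
averages = {name: sum(pair) / 2 for name, pair in scores_2011_2012.items()}

mean = statistics.mean(averages.values())
stdev = statistics.stdev(averages.values())

# Anyone more than two standard deviations below the mean is, in the
# article's phrase, "in the toilet."
laggards = [name for name, avg in averages.items() if avg < mean - 2 * stdev]
print(laggards)  # ['Forecaster H'] with these made-up numbers
```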

Hummer works for Wayne Hummer Investments, the business his dad founded, so his job is probably safe in spite of his apparent incompetence. Herrick, on the other hand, should definitely be fired, right? Not necessarily – he won plaudits from the Journal in 2002 and 2004 for getting the numbers right.

Interestingly, the 2002 article featuring Herrick does offer some historical perspective on that year’s rankings. It points out that Gail Fosler of The Conference Board had come in last after topping the rankings in two earlier years. The author attributes her up-and-down record to her consistently bullish outlook on the economy.

Yet that statement should be a red flag, too. As the old saying goes, even a stopped clock is right twice a day. Fosler’s consistent bias made her overall forecasting record little better than a random guess. But perhaps she benefited from her bias; her big wins probably helped her credibility more than her big loss hurt it.
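To see why a persistent bias produces that kind of boom-and-bust record, here is a toy simulation. The numbers are entirely hypothetical and not a model of Fosler or the Journal’s methodology: a forecaster who always predicts strong growth beats a pack of unbiased forecasters in the occasional year when the economy surprises to the upside, and lands at the bottom when it doesn’t.

```python
# Toy "stopped clock" simulation with invented numbers.
import random

random.seed(0)

N_YEARS = 20
N_UNBIASED = 10

first_place_years = 0
last_place_years = 0

for _ in range(N_YEARS):
    actual_growth = random.gauss(2.5, 1.5)         # "true" GDP growth, in percent
    bullish_forecast = 4.0 + random.gauss(0, 0.5)  # always predicts a boom
    # Unbiased forecasters: errors centered on the true value.
    unbiased_forecasts = [actual_growth + random.gauss(0, 1.0)
                          for _ in range(N_UNBIASED)]

    unbiased_errors = [abs(f - actual_growth) for f in unbiased_forecasts]
    bullish_error = abs(bullish_forecast - actual_growth)

    if bullish_error < min(unbiased_errors):
        first_place_years += 1
    if bullish_error > max(unbiased_errors):
        last_place_years += 1

print(f"Bullish forecaster finished first in {first_place_years} of {N_YEARS} years")
print(f"Bullish forecaster finished last in {last_place_years} of {N_YEARS} years")
```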

The lesson is simple: the only meaningful way to evaluate forecasters is over the long term, and even then past performance is no guarantee of future results. Show me a forecaster who comes close to the figures year after year, in recessions and booms, and I’ll agree he or she may offer value to investors and executives. Until then, you’ll do just as well guessing the numbers yourself as you will guessing which forecaster will get them right.

