Sunday, March 24, 2013

Nate Silver's The Signal and the Noise

A couple of months back, on a weekend pilgrimage down to Powell's Books in Portland, I picked up a copy of Nate Silver's The Signal and the Noise: Why So Many Predictions Fail - but Some Don't. It was the subject of an upcoming lunch-time book club at work, so I thought I'd give it a try.

Silver is known for prescient election forecasts on his FiveThirtyEight blog. Using methods based on aggregation of independent polls, he correctly called all 50 states in the 2012 U.S. presidential election and 49 out of 50 in 2008. Prior to politics, Silver dabbled in the applied end of predictive statistics in finance, sports handicapping, and online gambling.

Errant predictions are shown to be at the heart of many modern problems from the 2008 financial crisis to unreliable science (Why Most Published Research Findings Are False).

Silver draws links to several well-known or should-be-well-known people, and seems to have met an amazing number of them: Paul Krugman, Google Chief Economist Hal Varian, NYU political scientist Bruce Bueno de Mesquita, Chaos theorist Edward Lorenz, statistician Andrew Gelman, behavioral economist Richard Thaler, psychologist Daniel Kahneman, financial commentator Henry Blodget, security expert Bruce Schneier and enough others to fill pages of often humorous end-notes.

Geek-lit classics Gödel, Escher, Bach and The Hitchhiker's Guide to the Galaxy are duly referenced along with classic classics like Shakespeare and the Bible.

That which has been is what will be, That which is done is what will be done, And there is nothing new under the sun. ... There is no remembrance of former things, Nor will there be any remembrance of things that are to come By those who will come after.

Ecclesiastes 1:9,11

The famous saying is that prediction is hard, especially about the future. The ways in which predictions succeed or fail are illustrated with anecdotes drawn from twelve areas of application.

The financial crisis

High on the list of causes of the 2008 financial crisis are hubris, overconfidence with untested methods and conflicts of interest. The technical flaw hidden in mortgage backed securities was an assumption of independence - the assumption that a default on one loan was uncorrelated with the performance of another loan. That assumption was undone by an "out of sample" event, the correlated decline in the national housing market that precipitated the crisis.
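The danger in that independence assumption can be illustrated with a toy simulation (all numbers here are hypothetical, not drawn from the book): two loan pools with the same average default rate behave very differently in the tail once a shared housing-market factor correlates the defaults.

```python
import numpy as np

rng = np.random.default_rng(0)
n_loans, n_trials, p_default = 100, 20_000, 0.05

# Independent defaults: each loan fails on its own 5% coin flip.
indep_losses = rng.binomial(n_loans, p_default, size=n_trials)

# Correlated defaults: a shared "housing market" shock moves every
# loan's default probability up or down together (hypothetical scale).
housing = rng.normal(size=n_trials)                  # one shock per scenario
p_corr = np.clip(p_default + 0.04 * housing, 0, 1)   # shared factor
corr_losses = rng.binomial(n_loans, p_corr)

# Both pools default at roughly the same average rate...
print(indep_losses.mean(), corr_losses.mean())       # both ~5 loans per pool

# ...but a catastrophic pool (>15% defaults) is essentially impossible
# under independence, and orders of magnitude more likely with correlation.
print((indep_losses > 15).mean())
print((corr_losses > 15).mean())
```

The "out of sample" event in the chapter is exactly a large draw of the shared shock: individually reasonable loan-level estimates, pooled under a false independence assumption, wildly understate the chance of simultaneous failure.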

For Cathy O'Neil (aka Mathbabe), Silver doesn't go far enough in acknowledging the role of corruption; she sees him as led astray by ignoring politics and incentives.

"He spends very little time on the question of how people act inside larger systems, where a given modeler might be more interested in keeping their job or getting a big bonus than in making their model as accurate as possible."

With an insider's view based on her experience as a Wall Street quant, O'Neil sees outright crookedness, perverse incentives and lack of oversight as bigger contributors than bad luck or even incompetence. (Listen to Cathy O'Neil on EconTalk.)

But, Silver doesn't ignore ethics or incentives, having this to say about triple-A-rated toxic assets: "S&P and Moody’s had taken advantage of their select status to build up exceptional profits... The agencies had little incentive to compete on the basis of quality... Ratings quality was the least important factor driving the company’s profits." Silver also cites George Akerlof's "The Market for Lemons," which shows "that in a market plagued by asymmetries of information, the quality of goods will decrease and the market will come to be dominated by crooked sellers and gullible or desperate buyers." But, the chapter is called "A Catastrophic Failure of Prediction," when there's a good argument that the most important failings were elsewhere.


As a sideshow to the 2012 presidential election, the right-leaning Fox News went on confidently predicting a Republican landslide well after there was sufficient evidence to the contrary. Berkeley professor of psychology and political science Philip Tetlock calls this ideologically biased reasoning "a blurry fusion between facts and values". The mistakes highlighted here are overconfidence, loss of objectivity and confirmation bias.


The (dreaded) sports chapter presents another tale of competition between quantitative objectivity and human judgment - the contest between Moneyball statistics and the gut instincts of talent scouts. This time Silver is conciliatory, finding merit on both sides. Our brains evolved to make decisions under conditions of uncertainty and insufficient information. Statistics can help correct for those predictably irrational inbuilt biases in human reasoning. But intuition may glean scraps of information that don't show up in the numbers.


Meteorology has the benefit of fairly advanced models based on real physics. Technology also helps; more computing power enables higher resolution models. But, non-linearity makes these models highly sensitive to initial conditions, a hallmark of chaotic systems and a challenge to forecasters.
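That sensitivity to initial conditions is easy to see in a toy chaotic system. The logistic map is a standard textbook example (my illustration, not one from the book): two trajectories starting a millionth apart soon disagree completely.

```python
# Sensitivity to initial conditions in the logistic map x -> r*x*(1-x),
# run in its chaotic regime (r = 4).
r = 4.0
x, y = 0.4, 0.4 + 1e-6   # two starting points differing by one part in a million
max_gap = 0.0
for _ in range(50):
    x, y = r * x * (1 - x), r * y * (1 - y)
    max_gap = max(max_gap, abs(x - y))
print(max_gap)  # order 1: the microscopic initial difference has exploded
```

Weather models face the same arithmetic: measurement error in today's atmosphere, however small, is amplified until the forecast loses all skill, which is why forecasts degrade so sharply beyond about a week.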

Weather forecasting turns out to be one of the success stories of the book. Humbly realizing the imperfect nature of their craft, forecasters "express their predictions in terms of probabilities: there’s a 40 percent chance of rain tomorrow."

Allan Murphy's "What Is a Good Forecast? An Essay on the Nature of Goodness in Weather Forecasting" offers three dimensions of quality in forecasting: accuracy, honesty, and value. Murphy advocates an admirable set of principles: honesty and humility, a frank assessment of the limitations of the methods, and learning from the inevitable missed calls.


Earthquakes have a long record of frustrating attempts at prediction. Either earthquakes are inherently unpredictable or we can't measure the truly predictive variables. That hasn't stopped people from finding what appear to be patterns. When we mistake noise for signal, fooling ourselves into seeing patterns in random data, we are committing a statistical sin known as overfitting.


Silver reserves a lot of criticism for economists, perhaps surprising for a graduate of the University of Chicago's economics department and former student at the London School of Economics. His complaint is that much economic analysis is presented without a fair accounting for the large uncertainties involved, for fear that an honest evaluation of uncertainty might lead to diminished authority and influence.


Predictions have value in that they can inform decision making. If you're in the CDC's position, overreacting to a potential outbreak can be costly, but a false sense of security might leave you unprepared for the real thing. Thinking in terms of probabilities allows more nuanced weighing of both consequences and likelihood. This can be formalized by integrating the economic concept of utility with probability.
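In the simplest case this utility-times-probability reasoning reduces to the classic cost-loss rule: act when the probability of the event times the loss of being unprepared exceeds the cost of preparing. A sketch with hypothetical numbers (the function name and figures are mine, not the book's):

```python
# Cost-loss decision rule: prepare when expected loss exceeds the cost.
def should_prepare(p_outbreak: float, cost_prepare: float, loss_unprepared: float) -> bool:
    """Act if the expected loss of doing nothing exceeds the cost of acting."""
    expected_loss = p_outbreak * loss_unprepared
    return expected_loss > cost_prepare

# With a preparation cost of 1 against an unprepared loss of 100,
# even a 5% outbreak probability justifies acting (0.05 * 100 > 1)...
print(should_prepare(0.05, 1, 100))   # True
# ...while a 0.5% probability does not (0.005 * 100 < 1).
print(should_prepare(0.005, 1, 100))  # False
```

The point of the probabilistic framing is exactly this: a 5% event can still demand action when the stakes are lopsided, something a bare "it probably won't happen" forecast obscures.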


The two chapters on Chess and Poker draw out the distinction between games of complete information like Chess and games like Poker where chance and hidden information play a significant role. Chess is hard simply based on combinatorics - the size of the search tree.
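The combinatorics can be sketched in a few lines, assuming the commonly cited average of roughly 35 legal moves per position and a game of about 80 half-moves (both round figures, not taken from the book):

```python
# Back-of-envelope: why chess is hard despite being a game of
# complete information. Assuming ~35 legal moves per position and
# an 80-ply game, the naive game tree is astronomically large.
branching, plies = 35, 80
tree_size = branching ** plies
print(len(str(tree_size)))  # 124 digits: far beyond any brute-force search
```

This is why chess engines rely on pruning and evaluation heuristics rather than exhaustive search; the difficulty is sheer scale, not hidden information.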


Poker, by contrast, is hard because of what we don't know. What cards are my opponents holding? What cards will come up on the next draw? Good poker players (not that I'm among them) can infer likely hands based on the choices made by other players. Good players also deviate from strictly correct play just enough to confuse these inferences.

Financial markets

As in poker, second order effects play a big role in financial markets. Traders and their algorithms try to second guess each other, making predictions, not just about the underlying securities but also about the beliefs of other investors. These predictions have the power to move markets.

In Silver's words, "efficient markets meet irrational exuberance". The market consists of two tracks running simultaneously on the same road. First there is the signal track, the rational market well grounded in fundamentals that prevails in the long run. "Then there is the fast track, the noise track, which is full of momentum trading, positive feedbacks, skewed incentives and herding behavior. Usually it is just a rock-paper-scissors game that does no real good to the broader economy—but also perhaps no real harm. It’s just a bunch of sweaty traders passing money around."

Naturally, the Wall Street Journal's reviewer raises objections to that characterization. Myself, I'm skeptical of the no-harm-done part. Predatory practices like high-frequency trading prosper at the expense of everyone else in the market.


Noise can obscure the true signal, and noise can be used by those with ulterior motives to cast doubt on valid conclusions. In the politicized debate around climate change, it's not unheard of for vested interests to cherry-pick facts in support of foregone conclusions. Clearly, this is no way to get at the truth. Silver wisely points out the error made by many on the scientific side in assuming that the political problem can be solved by persuading more people on the basis of scientific merit. That won't work. In a political cat-fight, scientific validity means very little.

"The dysfunctional state of the American political system is the best reason to be pessimistic about our country’s future. Our scientific and technological prowess is the best reason to be optimistic."


An interview with Donald Rumsfeld kicks off the chapter on the unfamiliar and the improbable. Who better to talk about the "unknown unknowns"? Unique unlikely events like the 9/11 attack on the World Trade Center exist in the unexpectedly fat tails of the probability distribution. These are Nassim Taleb's Black Swans.

The uniqueness of these rare events means they are likely to be outside the scope of standard models. Like earthquakes, their frequency falls exponentially in proportion to their severity. Also like earthquakes, the specifics are hard to predict.
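That frequency-severity relationship for earthquakes is the Gutenberg-Richter law, log10(N) = a - b·M, with b close to 1: each step up in magnitude makes a quake roughly ten times rarer. A quick sketch (the constant `a` here is a hypothetical regional value, chosen only for illustration):

```python
# Gutenberg-Richter sketch: annual rate of quakes at or above magnitude M,
# following log10(N) = a - b*M with b ~ 1 (a is a made-up regional constant).
def annual_rate(magnitude: float, a: float = 8.0, b: float = 1.0) -> float:
    return 10 ** (a - b * magnitude)

# Each whole point of magnitude is ~10x rarer than the one below it.
print(annual_rate(6) / annual_rate(7))  # 10.0
```

The law tells you how often magnitude-7 quakes occur over the long run, but, as the chapter emphasizes, nothing about when or where the next one will strike.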

Thomas Bayes

Principles of a forecaster

In the statistical schism between frequentists and Bayesians, Silver comes down on the Bayesian side. The book refers in several places to the process of adjusting beliefs in a measured way with the arrival of new evidence. Silver approaches forecasting with a few principles for making better predictions:

  • Think probabilistically: "Instead of spitting out just one number and claiming to know exactly what will happen, I instead articulate a range of possible outcomes. [...] [A] wide distribution of outcomes represent[s] the most honest expression of the uncertainty in the real world."
  • Have the humility to acknowledge uncertainty and learn from mistakes: "Uncertainty is an essential and nonnegotiable part of a forecast." & "...failing to change our forecast because we risk embarrassment by doing so...reveals a lack of courage."
  • Seek consensus: "aggregate or group forecasts are more accurate than individual ones." Weigh evidence objectively rather than selectively.
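The measured belief-adjustment the book keeps returning to is just Bayes' rule applied repeatedly. A minimal sketch with hypothetical likelihoods (the 75% and 10% figures are mine, chosen for illustration): starting from an even prior, each observation of evidence E nudges the belief toward certainty.

```python
# Bayes' rule: update a prior belief P(H) on seeing evidence E,
# given the likelihoods P(E|H) and P(E|not H).
def bayes_update(prior: float, p_e_given_h: float, p_e_given_not_h: float) -> float:
    """Posterior P(H|E) = P(E|H)P(H) / [P(E|H)P(H) + P(E|~H)P(~H)]."""
    numer = p_e_given_h * prior
    denom = numer + p_e_given_not_h * (1 - prior)
    return numer / denom

belief = 0.5                # start agnostic
for _ in range(3):          # three independent observations of E
    belief = bayes_update(belief, 0.75, 0.10)
print(round(belief, 3))     # 0.998: strong but still not absolute certainty
```

Note that the posterior never quite reaches 1.0, which is the book's point: beliefs should move decisively with strong evidence but always leave room to be wrong.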

Silver has an amusing take on the old fox and hedgehog story. Foxes are better forecasters because of their tolerance for complexity and ambiguity, openness to multiple approaches and willingness to acknowledge error. Hedgehogs have one big idea, a simple explanation that ideology tells them must be right. Silver is a fox, of course. (Who wants to be a hedgehog?)

I enjoyed the book and recommend it to anyone with an interest in stats, data science or machine learning. Silver tells good stories and packs them with things to think about later. But, if Silver does have one big hedgehog-ish idea, Bayes is it.

Related stuff