Can statistical analysis best human judgment? This is one of the arguments we make in The Wages of Wins, and it has been greeted with a bit of skepticism from some of our readers (and others who have never read our book). Typically, the sentiment expressed is as follows: How can a statistical model yield decisions that are better than those reached from a lifetime of experience and personal observation? Surely, the skeptics argue, GMs in the NBA are better evaluators of playing talent than a statistical model built by some economists who have never played college basketball. The skeptics seem to conclude that these economists are just arrogant, egotistical idiots. Yet this is our argument, and it appears we are not alone in arguing that statistical analysis can improve upon the lessons learned from life experience.
The Financial Times recently ran a piece on two political scientists, Andrew Martin and Kevin Quinn, who claim they can predict how Supreme Court justices will vote. Like us, they met some skepticism. One of the skeptics, Ted Ruger (a law professor at the University of Pennsylvania), challenged Martin and Quinn to compare the accuracy of two methods of predicting Supreme Court cases. Martin and Quinn would use their model, while Ruger would rely on 83 legal experts. Each attempted to predict opinions for cases that would come before the Supreme Court in 2002 (and had yet to be decided). I bet you will never guess how this ended. Yes, the statistical model correctly predicted the Supreme Court’s opinion 75% of the time, while the legal experts were correct only 59.1% of the time.
Likewise, a regression-based model has demonstrated superior assessment of whether criminals will re-offend. The Rapid Risk Assessment for Sexual Offender Recidivism assigns a criminal a point score (similar in concept to a credit score); a score of four or greater corresponds to a 55% chance of committing another sex offense. The state of Virginia uses this model, along with human discretion, when making parole decisions.
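To make the idea concrete, here is a minimal sketch of an additive risk-score model in the spirit of the one described above. The individual risk factors and their point values are hypothetical; only the general mechanism (sum points for factors present, compare to a threshold) comes from the text.

```python
# Illustrative additive risk score. Factor names and point values are
# made up for this sketch; only the sum-and-threshold idea is from the
# post above.

def risk_score(items):
    """Sum the points for each risk factor that is present.

    `items` is a list of (present, points) pairs.
    """
    return sum(points for present, points in items if present)

def high_risk(score, threshold=4):
    # The post reports that a score of four or greater corresponds to
    # an estimated 55% chance of re-offense.
    return score >= threshold

# Hypothetical offender profile: (factor present?, points)
profile = [(True, 2), (True, 1), (False, 3), (True, 1)]
score = risk_score(profile)         # 2 + 1 + 1 = 4
print(score, high_risk(score))      # 4 True
```

The appeal of such a model is exactly the consistency discussed later in the comments: identical inputs always produce identical outputs.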
In another, completely different area, Mark Nissen, a professor at the Naval Postgraduate School, concludes that procurement decisions are shifting toward computer-based systems in which human discretion is eliminated.
Here’s the rub: we are perceived by some as arrogant because we advocate a decision-making model based on statistical analysis. Yet there is a growing pile of evidence showing that, in many areas, experts who ignore statistical analysis tend to make poorer decisions. Given this result, shouldn’t we conclude that it’s more than a bit arrogant for “experts” to ignore what can be learned from statistical analysis?
– Stacey
Here is the Financial Times reference:
The Financial Times (September 1st/2nd) “How computers killed the expert” p. 1 & 2 of Life & Arts.
Pete
September 29, 2007
You can find the Financial Times piece online here:
http://www.law.yale.edu/intruders/5493.htm
Brian
September 29, 2007
There was another similar article in the WSJ Online recently.
http://blogs.wsj.com/numbersguy/grading-the-forecasts-of-experts-182/
Overconfidence in one’s ability to predict future events is one of those universal human biases. There is also “hindsight bias,” in which people fool themselves into believing past events were more predictable than they actually were: “Of course the Spurs won the championship. They were obviously the best team,” for example. Sports gambling depends on these faulty heuristics to stay in business.
My hunch is that when someone is an expert, his overconfidence outstrips the added predictive ability his expertise provides. Plus, experts are never paid to say “I don’t know.” So it is in their self-interest to appear more certain about the future than they should be.
Mathematical models are free of human bias. They can not only tell you the likely outcome of an event but also estimate a realistic confidence level for that prediction.
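The point about confidence levels can be sketched with a quick calculation. Suppose a model was correct on 75% of its predictions, as in the Supreme Court study mentioned in the post; a normal-approximation 95% confidence interval then quantifies the uncertainty around that figure. The sample size of 68 cases used here is a made-up number for illustration, not one given in the post.

```python
# Sketch: a statistical model can report not just a prediction but an
# uncertainty estimate. Normal-approximation 95% CI for an observed
# accuracy of 75% over n predictions (n = 68 is hypothetical).
import math

def accuracy_ci(p_hat, n, z=1.96):
    """95% confidence interval for an observed accuracy p_hat over n trials."""
    se = math.sqrt(p_hat * (1 - p_hat) / n)
    return p_hat - z * se, p_hat + z * se

lo, hi = accuracy_ci(0.75, 68)
print(f"95% CI: {lo:.2f} to {hi:.2f}")   # roughly 0.65 to 0.85
```

A human expert saying “I’m pretty sure” offers nothing comparable to this kind of explicit error bar.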
But math models can sometimes get a bad rap. In almost every NCAA basketball pool, there is some lucky fool who picks tons of winners. He would probably beat a really good prediction model, but the other 99 people in the pool wouldn’t. So some people say, aha, your math is all for nothing.
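The “lucky fool” effect is easy to see in a simulation. In this sketch, a model picks each game correctly 70% of the time while each of 100 pool entrants picks correctly 55% of the time; all of these numbers (including the 63-game bracket) are made up for illustration. The model reliably beats any single entrant, yet it often loses the pool to whoever got luckiest.

```python
# Monte Carlo sketch of the "lucky fool" effect: a skilled model vs a
# pool of 100 weaker pickers. All rates and sizes are hypothetical.
import random

random.seed(42)
N_GAMES, N_POOL, N_TRIALS = 63, 100, 2000
MODEL_P, CROWD_P = 0.70, 0.55

def correct_picks(p, n=N_GAMES):
    # Number of games picked correctly with per-game accuracy p.
    return sum(random.random() < p for _ in range(n))

model_beats_best = 0      # model outscores everyone in the pool
model_beats_typical = 0   # model outscores one given entrant
for _ in range(N_TRIALS):
    model = correct_picks(MODEL_P)
    pool = [correct_picks(CROWD_P) for _ in range(N_POOL)]
    if model > max(pool):
        model_beats_best += 1
    if model > pool[0]:
        model_beats_typical += 1

print(f"model beats a given entrant: {model_beats_typical / N_TRIALS:.0%}")
print(f"model wins the whole pool:   {model_beats_best / N_TRIALS:.0%}")
```

The first rate comes out far higher than the second: being better than each individual is not the same as being better than the maximum of 100 lucky draws.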
Bill Schauer
October 1, 2007
Just for the record, mathematical models are free from SOME types of human bias but not others. All models rely on assumptions, and bias can hide in those assumptions (and frequently does). What models are free from is inconsistency: give a model the exact same inputs and it will produce the exact same outputs. Human experts often take different factors into account in each analysis, as when a basketball expert considers a team’s injury situation one time but not another, or changes the weights of various factors from one analysis to the next.
Slobodan Chutzpah
October 2, 2007
As Bill Schauer points out, mathematical models are indeed free from certain types of bias but not others.
When it comes to using them in sports, I don’t see any great ethical problems, but applying mathematical models to other human activities, such as deciding questions regarding freedom, rights, or employment, certainly presents a more treacherous path that shouldn’t be trod lightly.
It is always very dangerous to remove human traits such as empathy from the equation when dealing with humans. I’m digressing a bit, but this is relevant to the larger question.
Vinay
October 2, 2007
I guess there’s a smart intuition to this for basketball as well, though.
When an expert assesses a player, numbers still play a role in his own calculus: everything from points scored to steals and turnovers. However, the ‘non-mathematical’ expert places those numbers in a calculus built around his own (often extensive) personal experience. A stats-based model is simply more objective about the experiential context in which it places different numeric values; i.e., it draws on more information sets than any one expert can.