A Comment on the Player Efficiency Rating

Posted on November 17, 2006


Before Malcolm Gladwell wrote his article on The Wages of Wins in the New Yorker he asked me a series of questions about our research. One of his questions was, “How would you say your system differs (and is better than) Hollinger’s PER? I’ve noticed that some of his conclusions vary from yours.”

In writing his article Gladwell did not reference my answer to this question, primarily (and I am speculating here) because to explain how we differ from John Hollinger’s methods he would have to first offer readers of the New Yorker a discussion of how PERs is calculated. When you are limited to less than 2,000 words, such a detour is difficult to take.

A few days ago someone asked if I would comment on PERs in this forum. Here we have no limit on how many words we post, although I will try to come in under 2,000 words.

Let me begin this rather lengthy essay by making a few observations about how Hollinger calculates his Player Efficiency Rating (PER). For this, I am employing his discussion in his Pro Basketball Prospectus 2002.

Offensive and Defensive Efficiency 

As noted in The Wages of Wins, Hollinger begins in the same place where we start – specifically, he notes that wins are determined by offensive and defensive efficiency. Offensive efficiency is points scored per possession; defensive efficiency is points allowed per possession. Although we each note this relationship, we then go in different directions. We employ regression analysis to determine the value – in terms of wins – of the various components of offensive and defensive efficiency. In other words, we go entirely where the data takes us. Hollinger does not statistically derive his values, but takes a different approach.
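To make the regression step concrete, here is a minimal sketch of regressing wins on offensive and defensive efficiency. The team numbers below are invented for illustration – they are not the actual NBA data, and this is not the authors’ actual model:

```python
# Illustrative sketch: regress wins on offensive and defensive efficiency.
# All numbers here are hypothetical, invented purely for demonstration.
import numpy as np

off_eff = np.array([1.08, 1.02, 1.10, 0.98, 1.05])  # points scored per possession
def_eff = np.array([1.02, 1.07, 1.00, 1.06, 1.08])  # points allowed per possession
wins    = np.array([56,   29,   66,   21,   34])    # wins in the season

# Design matrix: intercept, offensive efficiency, defensive efficiency
X = np.column_stack([np.ones_like(off_eff), off_eff, def_eff])
coef, *_ = np.linalg.lstsq(X, wins, rcond=None)

# With sensible data, the estimated coefficient on offensive efficiency
# is positive (scoring efficiently adds wins) and the coefficient on
# defensive efficiency is negative (allowing points costs wins).
print(coef)
```

The point of the exercise is that the weights attached to offensive and defensive efficiency come out of the data, rather than being chosen by the analyst.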

Why We Create Models

Having noted the importance of offensive and defensive efficiency, Hollinger proceeds to discuss a variety of measures of performance which serve as building blocks for PERs. These building blocks include Points per Shot Attempt, Pure Point Rating, Assist Ratio, Turnover Ratio, Rebound Rate, and Usage Rate. He defends these measures as “improvements” over existing metrics, often noting that the rankings that result evaluate players in a fashion consistent with what NBA observers would believe. In other words, his metrics fit what he believed about the players before he started.

Unfortunately, this is not the way science works. We do not begin with our beliefs, play with the numbers until our beliefs are confirmed, and then call it a day. Models are not evaluated in terms of whether they are consistent with what we believe, but in terms of their ability to explain what we purport to explain (and furthermore, provide predictive power).

This is a point that is often lost in discussions of how to measure player performance in sports. Let’s think about baseball for a moment. People who study baseball would argue that batting average – hits divided by at-bats – is not as good a measure of performance as OPS – on base percentage plus slugging average. The reason for this conclusion is that OPS is a better predictor of runs scored and wins than batting average. In other words, OPS is superior to batting average because it does a better job of explaining how many runs a team scores. One would not argue that OPS is better simply because it ranks players in a fashion that fits our prior beliefs.

In examining Hollinger’s metrics, though, it is not clear that his measurements are trying to explain anything more than what he originally believed about the players. He offers various weights for the statistics the NBA tabulates, and at times it appears he is constructing these weights in terms of points scored. But he never establishes that the chosen weights allow him to predict how many points a team scores or how many games the team wins. Without knowing precisely what and how well PERs explains and/or predicts, it becomes very difficult to verify Hollinger’s claim that this metric is “accurate.”

Measuring Shooting Efficiency

Looking at the specific weights Hollinger chooses we see another problem. In discussing the NBA Efficiency metric – which the NBA presents at its website – I argued that this measure fails to penalize inefficient shooting. The regression of wins on offensive and defensive efficiency reveals that shooting efficiency impacts outcomes in basketball. The ball does indeed have to go through the hoop for a team to be successful.

The same critique offered for NBA Efficiency also applies to Hollinger’s PERs, except the problem is even worse. Hollinger argues that each two point field goal made is worth about 1.65 points. A three point field goal made is worth 2.65 points. A missed field goal, though, costs a team 0.72 points.

Given these values, with a bit of math we can show that a player will break even on his two point field goal attempts if he hits on 30.4% of these shots. On three pointers the break-even point is 21.4%. If a player exceeds these thresholds, and virtually every NBA player does so with respect to two-point shots, the more he shoots the higher his value in PERs. So a player can be an inefficient scorer and simply inflate his value by taking a large number of shots.
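The “bit of math” is simple to check. A player breaks even when the expected points from makes equal the expected cost of misses, i.e. when value × p = 0.72 × (1 − p), which gives p = 0.72 / (value + 0.72). A quick verification, using the weights quoted above:

```python
# Break-even shooting percentages under the PER weights quoted above:
# a two-point make is worth 1.65 points, a three-point make 2.65 points,
# and a missed field goal costs 0.72 points.
# Break-even: value * p - 0.72 * (1 - p) = 0  =>  p = 0.72 / (value + 0.72)

MISS_COST = 0.72

def break_even(value_of_make):
    """Shooting percentage at which makes exactly offset misses."""
    return MISS_COST / (value_of_make + MISS_COST)

print(f"two-point break-even:   {break_even(1.65):.1%}")  # 30.4%
print(f"three-point break-even: {break_even(2.65):.1%}")  # 21.4%
```

Since league-wide two-point shooting comfortably exceeds 30.4%, nearly any player can raise his PER simply by shooting more.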

But again, our model of wins suggests that inefficient shooting does not help a team win more games. Hence the conflict between PERs and Wins Produced. Hollinger has set his weights so that inefficient scorers still look pretty good. We argue that inefficient scoring reduces a team’s ability to win games, and therefore these players are not nearly as effective as people might believe.

Measuring Perceptions

Although PERs may not be the best measure of a player’s contribution to wins, it may offer a good measure of people’s perceptions of performance (after all, that appears to be the author’s intent). An earlier version of NBA Efficiency was Robert Bellotti’s Points Created model. The simplified version of Points Created is the same as NBA Efficiency, except Bellotti’s model incorporates personal fouls. In defending this model Bellotti noted in 1992 that “the NBA’s Most Valuable Player has finished either first or second that season in my Points Created rankings.” In other words, Points Created is accurate because it mimics perceptions.

Hollinger has a simplified version of PERs called Game Score, and for the 2005-06 season I found a 98% correlation between NBA Efficiency and Hollinger’s Game Score measure. In sum, it appears that Hollinger, Bellotti, and NBA Efficiency are offering very similar statements about productivity. And one can show – via an examination of voting for the MVP award and the coaches’ voting for the All-Rookie team – that metrics like NBA Efficiency are capturing people’s perceptions of performance.

Evidence Contradicting Perceptions

There is evidence, though, that perceptions of performance in basketball do not match the player’s actual impact on wins. And surprisingly, the evidence has very little to do with Wins Produced. Consider the following:

  • Less than 15% of wins in the NBA are explained by payroll. Regressions are nice, but not always understood by everyone. So to further illustrate the lack of association between pay and wins I took another approach. Specifically I ranked the teams in the NBA last year in terms of payroll and then divided this ranking into five equal segments. The results revealed that the teams in the top 20% spent an average of about $78 million on players and won – on average – 35.7 games. The next 20% spent $61 million and won 42.5 games. In the middle we see teams that spent only $54 million and won 39.7 games. When we look at the last two groupings – the teams that spent the least – we see clearly the very weak link between pay and wins in basketball. The 20% of teams ranked just below the middle in payroll won 47.7 games while spending $47 million on players. And the teams at the very bottom of the payroll rankings spent less than $38 million on their players and won 39.5 games. Yes, the teams at the bottom spent less than half what the teams spent at the top and actually won more games.
  • Okay, pay and wins do not have a strong link. What does this tell us about player evaluation? In football payroll explains less than 5% of wins. But in football we also see very little consistency in player performance. So decision-makers cannot easily know how to spend money to ensure success in the future. A similar problem – though to a lesser extent – exists in baseball. In basketball, though, players are much more consistent across time. The correlation between a player’s per-minute Win Score this season and last season is 0.84. As we detail in The Wages of Wins, the consistency we observe in basketball exceeds what we observe in either baseball or football. Despite this consistency, though, payroll is still not strongly linked to wins. In sum, decision-makers have a greater ability to predict the future in the NBA, yet the payroll-wins relationship still remains very weak.
  • When we look at what determines salary we see the problem. The primary player characteristic that dictates wages in the NBA is scoring. Shooting efficiency, rebounds, turnovers, and steals – factors that all impact outcomes – are not strongly linked to player pay. Given this evidence, we think players are evaluated incorrectly in the NBA. Too much emphasis is placed on scoring, and not enough on all the other factors that impact outcomes.
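The payroll-quintile exercise in the first bullet above can be reproduced in a few lines. The figures below are the approximate quintile averages quoted in the text:

```python
# Average payroll and wins by payroll quintile, as reported above
# (dollar figures are approximate averages; top spenders listed first).
quintiles = [
    ("top 20%",    78_000_000, 35.7),
    ("second 20%", 61_000_000, 42.5),
    ("middle 20%", 54_000_000, 39.7),
    ("fourth 20%", 47_000_000, 47.7),
    ("bottom 20%", 38_000_000, 39.5),
]

for label, payroll, wins in quintiles:
    print(f"{label:10s}  ${payroll / 1e6:>4.0f}M  {wins:.1f} wins")

# The relationship is not even monotonic: the cheapest quintile
# out-wins the most expensive one.
top_wins = quintiles[0][2]
bottom_wins = quintiles[-1][2]
print(bottom_wins > top_wins)  # True
```

If payroll tracked wins closely, the wins column would fall steadily as spending falls; instead it wanders, which is the weak link the bullet describes.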

This is one of the more important stories we tell in The Wages of Wins. Our examination of payroll, salaries, and performance all suggests that players are evaluated incorrectly. Our study of metrics like NBA Efficiency – and now Hollinger’s PERs – indicates that the mistake lies in the valuation of shooting efficiency. Inefficient scorers – like Allen Iverson – are paid far more than their contribution to wins justifies. Players who do not score – but offer other significant contributions to wins – tend to be underpaid.

Is this Important?

A few days ago CNNSI.com reported that new documents have been found that shed light on the invention of basketball by James Naismith. Apparently the game that might have inspired Naismith was called “Duck on a Rock.” A bit more than a century later we are engaged in a debate about how to measure a player’s performance in Duck on a Rock, version II. When we think about it this way, perhaps this is a very trivial issue.

From the perspective of economics, though, the story we tell is important (at least, I think so). What we argue in The Wages of Wins is that decision-makers – even when they have clear objectives and an abundance of information – still can make the same error over and over (we talk about why this can happen in the book). Such a story has clear implications for how we model human behavior in economics. And that is true, even if those implications come from a discussion of how well people are playing Duck on a Rock.

– DJ

Our research on the NBA was summarized HERE.

Wins Produced and Win Score are Discussed in the Following Posts

Simple Models of Player Performance

Wins Produced vs. Win Score

What Wins Produced Says and What It Does Not Say