NBA Babble and Win Score has added a couple of new features. You now have the following options for viewing the Win Score of NBA players.

Win Score stats by team

Win Score stats by day

Win Score stats by player

Win Score stats for every game, every player

Of course Jason Chandler’s website does more than provide a player’s Win Score. It also reports Position Adjusted Win Score (PAWS), Position Adjusted Win Score per minute (PAWSmin), and Wins Produced per 48 minutes (WP48). For all players you can look at these for the season or by individual game. So this is a really neat site for those interested in seeing more of the analysis introduced in The Wages of Wins.

As you look over the data, you will see some differences between what Chandler reports and what I report in this forum. Specifically, Chandler does not calculate WP48 in the same fashion and hence reports slightly different numbers.

To avoid any confusion, I thought I would briefly review how we calculate WP48, as it is reported in The Wages of Wins. Along the way I will answer a few comments from critics and show that the simple approach Chandler takes gives you virtually the same results we report.

**The WOW Approach to WP48**

*Connecting Wins to Offensive and Defensive Efficiency*

We should begin with the very first step. Both John Hollinger and Dean Oliver argue that wins are determined by a team’s offensive and defensive efficiency, or how many points a team scores and surrenders per possession. I have written a paper entitled “A Simple Measure of Worker Productivity in the National Basketball Association” (which was a working paper when the book was published but should finally be published later this year). This paper demonstrates that what Hollinger and Oliver assert is true. Via some fairly simple math, one can show that wins are indeed all about offensive and defensive efficiency.
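The efficiency idea is easy to compute from any box score. Here is a minimal sketch; the 0.44 free-throw weight is the conventional possession approximation, not necessarily the exact coefficient used in the paper:

```python
def possessions(fga, orb, tov, fta):
    """Estimate possessions employed: field goal attempts not extended by an
    offensive rebound, plus turnovers, plus a fraction of free throw trips."""
    return fga - orb + tov + 0.44 * fta

def points_per_possession(points, fga, orb, tov, fta):
    """Offensive efficiency: points scored per possession employed. The same
    ratio computed from opponent totals gives defensive efficiency."""
    return points / possessions(fga, orb, tov, fta)
```

Compute both ratios for every team and you have the two variables that, per the argument above, determine wins.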

I note this because the first step in building an empirical model is to establish theoretically the relationship between what you are trying to explain and what you think does the explaining. Hollinger and Oliver both asserted that the efficiency measures explained wins, but neither attempted to show that this must be the case. In the aforementioned article I try to show that the math is clearly on their side.

*Blocked Shots and Assists*

Once we statistically link wins to offensive and defensive efficiency, we can then determine the value, in terms of wins, of points, rebounds, steals, field goal attempts, free throw attempts, turnovers, and personal fouls. What is missing are blocked shots and assists.

Of these last two factors, the value of blocked shots is the easiest to determine. Part of defensive efficiency is the number of field goals made by the opponent. One can show that blocked shots impact how many shots the opponent makes, and by estimating this relationship you can connect blocked shots to wins.

Assists are a bit trickier. The basic theory behind an assist is that one player is taking an action that increases the productivity of a teammate. We find that the empirical evidence supports this claim. As your teammates’ assists increase, your overall productivity rises. We can use this relationship to estimate the value of an assist.

Now it is important to see how we incorporate assists into our model. As we detail in The Wages of Wins, the value of an assist represents a transfer between players. What we do is subtract the value of assists from each player, and add back that same value to the players who get the assists.
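As a sketch of that transfer, the per-assist win value and the player numbers below are hypothetical, purely for illustration:

```python
ASSIST_VALUE = 0.022  # hypothetical win value of one assist, for illustration only

def redistribute_assist_credit(production, assisted_fgm, assists):
    """Subtract the value of each player's assisted baskets, and credit the
    same value back to the players who recorded the assists. Because every
    assisted field goal corresponds to exactly one teammate assist, total
    team production is unchanged: assists only move credit around."""
    return {p: production[p]
               - ASSIST_VALUE * assisted_fgm[p]  # credit handed to assisters
               + ASSIST_VALUE * assists[p]       # credit received for assists
            for p in production}

# Two players: A scores mostly off B's passes, B does most of the passing.
prod = {"A": 5.0, "B": 3.0}
adjusted = redistribute_assist_credit(prod, {"A": 120, "B": 30}, {"A": 30, "B": 120})
```

Note that the team total is the same before and after the transfer, which is the point: the adjustment changes who gets credit, not how much production exists.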

It is important to note that the value for assists that we use is not determined arbitrarily, but is determined by our model of individual player productivity. Now one of our critics noted that you could change the value of an assist and not alter our forecast of wins. Of course the critic fails to offer an alternative model to arrive at the value of an assist. Rather, this person simply asserts that changing assists does not change the forecast.

It certainly is clear, if you have read The Wages of Wins, that assists are not used to forecast wins. Our forecast of team wins from the Wins Produced model appears on page 110 of the book. Our discussion of assists occurs on page 117. From this it is obvious that assists are not necessary to forecast wins.

Why is this? Once again, assists are a transfer of credit from player to player. We are looking at production after the game has happened. The production is already there. The assists just tell us something about who should get credit for that production.

The fact that assists are not used to forecast wins is quite clear if you read The Wages of Wins. Unfortunately, our critics either cannot read or are not interested in reporting what we do accurately (more on that in a moment).

*Calculating WP48*

Once we have ascertained the value of each statistic, we can now calculate WP48. To do this, you need the following three elements.

- A player’s statistics, valued in terms of the impact these statistics have on wins.

- The average performance at a player’s position.

- The value of team statistics, an adjustment that allows us to account for the quality of a team’s defense and the pace the team plays.

We note in The Wages of Wins that a player’s value is primarily determined by the first two elements, or a player’s statistical production relative to the average performance of a player at that position.
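In code, the three elements combine roughly like this; the recentring constant and the exact functional form here are my assumptions for illustration, not the book’s published coefficients:

```python
LEAGUE_AVG_WP48 = 0.099  # an average player produces about 0.099 wins per 48 minutes

def wp48(player_value_per_min, position_avg_per_min, team_adj_per_min):
    """Sketch of the three-element structure: the player's per-minute win
    value, measured relative to the average at his position, nudged by a
    small team adjustment for defense and pace, then scaled to 48 minutes
    and recentred so an exactly average player lands on the league average."""
    relative = player_value_per_min - position_avg_per_min + team_adj_per_min
    return 48 * relative + LEAGUE_AVG_WP48
```

The structure makes the next point visible: the first two terms do the heavy lifting, and the team adjustment is a small nudge.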

Despite making this clear in the book, there is still some controversy surrounding the last step, or the team adjustment. It has been suggested in some circles that the team adjustment is a giant fudge factor. As we note in The Wages of Wins, and as I have noted in this forum, the team adjustment is not what drives Wins Produced.

To see this point, consider PAWSmin. PAWSmin is simply Win Score per minute, adjusted for the position the player plays. PAWSmin does not have any team adjustment at all. WP48, as noted, does have a team adjustment. If the team adjustment were truly that important, these two values would be very different. But as I noted a few weeks ago, **the correlation between WP48 and PAWSmin is 0.994**. Yes, there is a 0.99 correlation between the player evaluation with and without the team adjustment.
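That correlation claim is easy to check yourself given the two columns of player ratings. A self-contained Pearson correlation:

```python
from math import sqrt

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)
```

Feed it each player’s WP48 and PAWSmin and you can reproduce the 0.994 figure from the published numbers.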

This result indicates quite clearly that player performance is indeed all about what the player has done relative to the player’s position. The team adjustment is not driving our player evaluations.

**The NBA Babble Approach**

All the steps I describe to calculate WP48 take a bit of time. I have reached a point where I can download the data for a team from NBA.com and determine each player’s WP48 on that team in about five minutes. To update this after every game – which Jason Chandler wishes to do – would be very time consuming. There are 30 teams; if all played the night before, it would take you 150 minutes to update the stats.

Fortunately, there is an easier way. Because PAWSmin and WP48 are essentially the same, you can estimate WP48 with the following formula:

WP48 = 0.104 + 1.621*PAWSmin

This formula is obviously quite a bit easier than all the steps I described earlier. As you scan Chandler’s calculations you will see a great deal of similarity between what he reports and what I report when I calculate WP48. Again, I use the actual values of the statistics in terms of wins and the team adjustment. Chandler uses the simple equation reported above. The results, though, are quite close (with the big differences actually driven by how Chandler and I consider position played).
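The shortcut is a one-liner. A sketch, using the coefficients from the formula above:

```python
def estimate_wp48(paws_per_min):
    """Approximate WP48 from position-adjusted Win Score per minute,
    using the regression coefficients reported in the post."""
    return 0.104 + 1.621 * paws_per_min

# Because PAWS is measured relative to the position average, an average
# player's PAWSmin is roughly zero, which maps to roughly the average WP48:
estimate_wp48(0.0)   # 0.104
estimate_wp48(0.05)  # 0.18505
```

Note that the intercept, 0.104, sits close to the league-average WP48, which is what you would expect if PAWSmin is centred near zero by construction.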

**Persistence of the Team Adjustment Critique**

On page 108 of The Wages of Wins are the following three sentences:

*“In general, the team statistical adjustment is quite small for each player and therefore this adjustment does not substantially alter our rankings of players across teams. To illustrate this point, the correlation coefficient between player production unadjusted for team statistics and then adjusted for team statistics is 0.99. In simple words, whether you adjust for the team statistics or not, the player rankings are essentially the same.”*

So we note the unimportance of the team adjustment in the book. We have noted this more than once in this forum. Yet last week, at NBA Babble and Win Score, there was the same critique in the comments.

Why does this criticism keep appearing? It’s important to note that the group that most often attacks The Wages of Wins is associated with the plus-minus approach to evaluating NBA players. This group of people is in the business of selling a non-box-score-based measure of performance to NBA teams. The premise behind their business is that the box score statistics the NBA tracks do not allow one to evaluate NBA players. The Wages of Wins suggests, quite clearly, that the box score statistics do tell us a great deal about the productivity of individual players.

Unfortunately, if you are in the business of selling a non-box-score-based method, The Wages of Wins presents a significant problem. The analysis in The Wages of Wins is essentially free. The teams already have the box score data. The book, which was published by an academic press, can be checked out for free from a university library (or perhaps your own public library, or perhaps you can borrow it from a friend). Given this reality, certain elements in the plus-minus crowd (and I am not referring at all to Wayne Winston, the originator of this approach, who has always been perfectly pleasant in various e-mail exchanges) feel the need to attack both The Wages of Wins and its authors. And given the money involved this seems understandable. After all, if the box score statistics can tell you who is “good” and “bad”, then a business based on a non-box-score approach is clearly threatened.

**Summarizing Wins Produced Again**

My sense at this point is that we have addressed the primary critiques of Wins Produced. Let me close by re-iterating what I think our model is and is not. Wins Produced is a measure of how productive a player has been in the past. It is primarily driven by a player’s ability to shoot efficiently, rebound, and create and avoid turnovers (again, relative to the average performance at a player’s position). It is designed to be both accurate and simple, and hopefully furthers our ability to use the data generated by the NBA to investigate various aspects of economic theory. In other words, Wins Produced is a research tool.

Now that we see what Wins Produced is, let me state again what it is not. Wins Produced tells us how productive a player has been, but it does not tell us why a player was productive. In this sense, it does not replace coaching or scouting. In my view, the job of a coach or scout is not to tell us how productive a player has been (the data tells us that), but why the player was productive, and furthermore, whether or not there is anything one can do to change what a player does on the court.

Our research has shown that, for the most part, players are what they are. Still, it is possible for player performance to change. Factors that can cause a player to be more or less productive include the productivity of teammates, injuries, stability of a team’s roster, and coaching. Yes, coaching has been found to statistically impact performance. What is not clear is how coaching matters. Hopefully, as we continue our research into the economics of sports, further light can be shed on that question.

– DJ

*Basketball Stories*

Ben

March 21, 2007

Since, if I understood the above (and the related footnote in the book) correctly, you did a player-by-player productivity analysis to determine the mean value of an assist, can you provide a distribution of the individual results? My intuition tells me that an “assist” varies significantly in value by player. I’d like to know whether I’m wrong or not.

Ben

March 21, 2007

I may not have been clear, I’m not asking for masses of data, I’d just appreciate a general idea (which I assume you already have) of how much the productivity contribution of an assist varies by player (the assister).

dberri

March 21, 2007

Ben,

The model links individual player productivity to the assists by all teammates. So what you are asking is not possible given the way the model is constructed.

wondering

March 21, 2007

so why do you have the team adjustment at all then? are you saying that Wins Produced without the team adjustment still estimates 97% of team wins?

Jerry

March 21, 2007

Wins Produced is not as accurate at the team level without the team adjustment. As with the above commenter, I would like to know exactly how big this difference is. It may be quite small, in which case I would be curious why the team adjustment was included, but it would clearly refute the critics. Either way, I would like Mr. Berri to stop quoting the same correlations over and over, and address the question of how well Wins Produced WITHOUT a team adjustment predicts TEAM WINS.

dberri

March 21, 2007

wondering,

The model estimated connects wins to offensive and defensive efficiency. These efficiency measures are composed mostly of factors tracked for individual players, but also include factors only tracked for the team. It is these latter factors that comprise the team adjustment (which mostly captures team tempo and team defense). To leave these factors out would leave your model mis-specified, which is a big no-no in estimating a model.

A model that is missing independent variables (which would be the case if the factors in the team adjustment were excluded) would not explain as well as a model that is fully specified. So no, you do not explain 95% of wins without the team adjustment. Of course, without the team adjustment you have a model that is mis-specified and does not make much sense. Again, as I said in the post, models have to be theoretically sound.

Ultimately what we are trying to do is build a model that is theoretically sound, explains wins, and of course, allows us to evaluate individual players. That is what Wins Produced does. This focus on the team adjustment is incorrect. With and without the team adjustment, the story told about players is essentially the same.

Yes, the team adjustment has to be in the wins model. But no, it does not impact what we are saying about the players.

jake

March 22, 2007

Very interesting. So what is the exact figure of how well Wins Produced without a team adjustment predicts team wins? For the sake of intellectual honesty please make this information public.

Mark T

March 22, 2007

I value this method of analysis quite a bit. But I encourage you to refrain from ascribing bias to those who challenge it. There are some fascinating statistical conundrums out there between the two models (like, why does Jason Collins have a consistently higher plus-minus than Jason Kidd, or why does Ben Gordon have a plus/minus of 5.2). And it is legitimate to ponder how a model that does not directly capture the ability of a defender to shut down the opponent’s best player without a block or steal fully values the defensive player’s contribution to a victory, although I understand your response and the statistical support behind the response. There is room for legitimate debate, but the debate will degrade if it becomes one about motives. Keep up the great work and thanks for the frequent updates to this blog!

anon

March 22, 2007

Mark T,

I think the tone stems from the nature of the critique. The plus-minus community hasn’t been that diplomatic in their criticism. There are certainly exceptions. But spend some time at the APBR forum investigating what people have had to say about Berri and Wages and you’ll find that Berri’s response is tame relative to some of the nasty things that have been said about him and his motives.

Jerry

March 22, 2007

I would say that things have gone both ways on this. A lot of members of the well-established and public APBRmetrics community were upset by Mr. Berri’s outright refusal to discuss his work in the way every other important APBR or SABR metrician has. They were also somewhat upset by the condescending tone Mr. Berri often employs.

That said, certainly some of the critiques of Mr. Berri’s work have been less than civil and I can see how he got defensive. I think if both sides would agree to participate in open and honest intellectual debate in a public forum (http://sonicscentral.com/apbrmetrics/viewforum.php?f=1) it would be best for everyone.

wondering

March 22, 2007

I understand your argument, Dave, but I’m still concerned about the model for the following reason. You use a regression to determine the weights given to each statistic. I tried drastically altering these weights — of course my player ratings changed drastically, but what was interesting was that, after I inserted the team adjustment, my predicted team wins were the same.

So how are you judging correctness here? If you use team wins, then who is to say that my weights are inferior to yours? Or who is to say that your player ratings are more correct than mine?

dberri

March 22, 2007

wondering,

Just a few questions, but when you say “I inserted a team adjustment,” what exactly do you mean? The team adjustment we use is simply the statistics tracked for the team that are not assigned to a player. This is restricted to field goals made by the opponent, opponent turnovers (that are not steals), and team rebounds. That is it. These factors account for how well a team played defense and also for tempo. Because these are only tracked at the team level, we end up arguing in The Wages of Wins that a player’s ability to play defense is equal to the average ability on his team. If a player is better than his team average, our method understates his productivity. If he is worse, it overstates it.

If someone came up with an objective measure of defensive ability, then we would not need a team adjustment at all (except for team rebounds and team turnovers, which are small).

What I need to know is: what team adjustment did you use? What theoretical support do you have for creating your team adjustment? If I collected more data (for example, an objective measure of defensive ability), could I eliminate your team adjustment?

This is a problem with doing statistical analysis. You can’t just throw data together and say you have a model. Models begin with some kind of theoretical structure. Our theory, which I can show has validity, is that wins are determined by offensive and defensive efficiency. What theory are you using in building your model?

wondering

March 22, 2007

Hmm… I could have sworn I read that your team adjustment accounts for more than just three factors. I thought it also included PFs and assists. So would that mean that if I make a drastic change to the weight given to assists (i.e. change it from 0.5 to 15), and make the corresponding team adjustment that you officially use, I SHOULD NOT receive the same number of predicted wins? I was under the impression that assists were just factors of offensive and defensive efficiency. Obviously, changing the weight of assists changes the offensive and defensive efficiency, but I thought predicting wins would be the same since the weight change is “undone” by the adjustment.

I’m not sure I agree with your last point. I do agree that models begin with theoretical structure, but if I can conjure up an arbitrary model that produces the exact same results as a model with a theoretical basis, how can you say that the arbitrary model doesn’t have validity? We’re getting the same results. To me, that implies that the phenomenon being tracked (i.e., predicted wins) can be tracked from multiple different perspectives equally well, and that the perspective being used here is nothing special.

Lastly: just for curiosity’s sake, can you tell us how well WP does without the team adjustment in predicting wins? Thanks.

John Smith

March 22, 2007

I am also curious how well Wins Produced without a team adjustment predicts team wins. While I appreciate that Mr. Berri responds to questions in his blog one cannot help but notice he is selective in his responses to direct questions.

Secondly, his choice of lumping critics into a plus-minus community and then questioning their financial motives is troubling on two levels. First, there is no cohesive community on basketball matters that focuses on plus/minus (and certainly not one that is making a material amount of money from the work). If he means the APBR community, that community is generally filled with volunteers holding diverse opinions on all matters, and plus/minus is but one topic among many. Second, by questioning financial motives he is employing an ad hominem argument against a strawman group, which really has no place in an academic discussion – let’s keep it to the merits.

Harold Almonte

March 23, 2007

Of course Wins Produced doesn’t need team adjustments. Win Score might need one, because its stats are weighted in terms of possessions with more or less arbitrariness than other linear ratings, but in WP they are weighted in terms of team wins. Individual players don’t win games; teams do. The wins regression is already the adjustment: not only a team adjustment but a historical adjustment.

On the much-argued matter of team defense (and I would add team scoring), I maintain there are no wholly individual accomplishments in basketball, but some stats are more shared than others. The links between assists and assisted field goals, and between defensive rebounds and teammates’ defense, are the most obvious; even “own shots” are not as unassisted as the defenders of “shot creators” maintain. So ratings should have no problem predicting team wins (whether point differential or win differential); that is easy in basketball. The hard part is weighting players within teams according to their statistical strengths and weaknesses (sometimes you don’t need math for that), and this is not a ratings problem but a box score problem, one that would require a reinvention, many scorekeepers, and play-by-play reviewers. That is too complicated to be solved by a model “designed to be simple.”

Jason

March 23, 2007

I think it’s important to remember that science is a process and the likelihood that any model explaining anything is either complete or not possible to improve is vanishingly small. All roads that lead to Rome were not built in a day. Science is a process. It is not an answer.

It is also important to remember that a model can capture *most* of the data in *most* circumstances but still have exceptions. If it is a statistical model that aims to approximate something these exceptions do not necessarily invalidate the model.

I have had exactly the same type of questions that Mark T has had. How can it be that someone like Ben Gordon performs poorly (as captured by WP or WS) while his *team* seems to perform better (which is what +/- apparently measures) when he’s on the floor? How is it that Troy Murphy has had such a good WP figure, but his presence seemed to coincide with being outscored dramatically and losing more games than winning? One possibility is that there’s random noise in either +/- or WS/WP (or both) and that in larger samples this starts to work out. Another is that WP/WS doesn’t capture *everything.* And I doubt very much that Dave would argue that it does. Not capturing everything and having exceptions doesn’t make a model wrong or present reason to ignore it outright. It is possible to get a job when economic indicators show a downturn in the economy and an increase in joblessness. It’s also possible that Ben Gordon’s influence on the game doesn’t always show up in the box score.

But remember: anecdotal exceptions do not invalidate a model.

What a model does is capture what is *generally* going on, not every specific. If enough specifics agree, the model will be strong. If enough disagree, it will fall apart. If the exceptions were commonplace, there would be no correlation of WP over time with player movement and changing teammates. There would also be no correlation between WP and +/-, but that’s not the case. There *is* a correlation, albeit not a perfect one. It appears to me that, overall, the +/- and WP methods agree more often than they do not.

In general, I suspect that reliance on WP/WS as a method of player evaluation for purposes of assembling a team would perform well since the Troy Murphys are not a majority of the league (thankfully). It is my suspicion that using only WP and WP48 (the former in relationship to the latter to separate out small sample size effects) a GM could probably obtain a *very* good team ignoring all other measures. Would the same be true of a +/- method? (Careful not to lump all of them together here, but for the sake of the statement, I’m lumping them together.) If +/- is less consistent over time, it’s less useful as a tool for evaluating personnel decisions at the level of acquisitions and compensation with limited resources, though it still may be quite useful for allocating resources and maximizing returns with available personnel. If it *is* consistent over time, then it’s a reasonable model. That’s something that is empirical though and data should exist.

The box score doesn’t capture everything, but to hear detractors of ‘box score methods’ you’d think that reading a box score tells you nothing, that the vast majority of the game is a result of intangibles and synergistic effects. Since *most* of the players that WP rates highly are ones that just about everyone agrees are very, very good players, I think this is a rather weak argument designed to ignore some data that an individual doesn’t like. Baby/bathwater, forest/trees. [I also doubt that most of these people completely ignore the stats; many cite things like scoring average to support a player’s worth, but that’s an aside.]

It *is* an interesting question why there appear to be some players (a minority, it seems) for whom WP/WS don’t seem to capture whatever it is that they do or do not do for a team. I suspect again that it largely has to do with defense. My most substantive critique of WP/WS is that the largest recorded defensive component requires additional actions.

A defensive rebound accurately records a defensive stop, but the guard who ‘encourages’ the bad shot that a forward rebounds receives no credit for the action. The rebound requires a missed shot, and defense contributes to that miss before the rebound is ever available.

*In general* I think the position adjustment takes care of most of this, since PAWS/WP shows how much better a player is than an average baseline, but it may not always work. The guard isn’t expected to rebound much, so there’s not a huge penalty for not getting credit relative to other guards, though there’s no advantage either. Similarly, though, there’s not a penalty for a poor defensive guard who allows his man to score at will. He wasn’t going to get many rebounds anyhow, so it seems difficult to impose a huge penalty.

There are likely statistical ways around the ‘guard-defense’ problem. The lower overall variance in guard performance (whether this is a product of a variation in supply of superior athletes tall enough to play guard relative to center or whether it’s a product of the variation of statistical accumulation at the different positions or, IMHO *both* to some degree) would suggest that the penalties *and* rewards aren’t as great for guard defense variance.

I have a harder time explaining how someone like Murphy could rebound well (rebounds accurately capturing a team stopping another team from scoring) while his team does poorly, though empirically this seems to have been the case for a few years. One possibility is that when he plays, he grabs the lion’s share of the available rebounds, but when he didn’t grab one, it was because there was no rebound to grab: the other team scored. Particularly porous defense by a good rebounder on a team otherwise devoid of rebounders will not show up in the game outcome. This scenario was quite plausible when he was on the Warriors, where there was rarely anyone else in the game for GS who was even a mildly competent rebounder, but it seems less likely in Indy, where he is often in the game with O’Neal or Foster, either of whom can actually grab said ball with their hands. There appears to be some anecdotal support for this scenario, as Murphy’s WP with the Warriors appears to have been much better than his WP in Indy. My gut says that he still lets people score at will, but now when someone else manages to force a bad shot, he isn’t guaranteed to be the guy on the team grabbing the board; thus his score suffers though he is the same player producing the same net effect on the team. The diminishing returns on another rebounder show up faster when that additional rebounder is also a lousy defender who doesn’t generate rebound opportunities on his own.

But again, this is anecdote. It is indeed possible for WS/WP to miss these cases. But as a model, if WS/WP are consistent over time and do not seem to covary significantly with player movement, etc., then these issues are probably not overly influencing things. And the anecdotal exceptions are places to start looking for *improvement* in the model rather than grounds for wholesale rejection (or character assassination of supporters/detractors). Were there more exceptions, it might call for rejection, but this doesn’t appear to be the case. I don’t want to dig out my copy of Kuhn here, but he has said something on the subject of models that applies here.