Who is the best team in the NBA? One could look at the standings, but this is too simple and frankly doesn’t offer much room for discussion.
Various writers – such as Steve Kerr, Tony Mejia, and Marc Stein – offer team rankings that argue particular teams are better or worse than their record might indicate. These rankings do inspire discussion, but they are not exactly objective.
Hollinger’s Power Rankings
Hence the need for the latest from John Hollinger. Hollinger has created a power ranking based entirely on objective data. Hollinger’s “secret” formula (does ESPN know what the word “secret” means?) indicates that his ranking relies upon the following factors:
- Team’s average scoring margin (MARG)
- Team’s average scoring margin over the last 10 games (MARGL10)
- Strength of schedule (SOS)
- Strength of schedule over the last 10 games (SOSL10)
- Whether or not a team has played its games predominantly at home or on the road, over the course of the season and recently (HOME, HOME10, ROAD, ROAD10)
The specific “secret” formula is as follows:
RATING = (((SOS-0.5)/0.037)*0.67) + (((SOSL10-0.5)/0.037)*0.33) + 100 + (0.67*(MARG+(((ROAD-HOME)*3.5)/(GAMES))) + (0.33*(MARGL10+(((ROAD10-HOME10)*3.5)/(10)))))
In sum, Hollinger considers a team’s offensive and defensive ability, the quality of the team’s opponents, where it has played, and how well it has performed recently.
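For anyone who wants to plug in their own numbers, here is a small Python sketch of the formula exactly as printed above. The variable names simply mirror the abbreviations in the list, and the sample inputs at the bottom are made up purely for illustration.

```python
def hollinger_rating(sos, sos_l10, marg, marg_l10,
                     home, road, home10, road10, games):
    """Hollinger's power rating, transcribed from the formula printed above.

    sos, sos_l10   -- strength of schedule (season, last 10 games)
    marg, marg_l10 -- average scoring margin (season, last 10 games)
    home, road     -- home and road games played over the season
    home10, road10 -- home and road games among the last 10
    games          -- total games played
    """
    schedule_part = ((sos - 0.5) / 0.037) * 0.67 + ((sos_l10 - 0.5) / 0.037) * 0.33
    season_margin = 0.67 * (marg + ((road - home) * 3.5) / games)
    recent_margin = 0.33 * (marg_l10 + ((road10 - home10) * 3.5) / 10)
    return schedule_part + 100 + season_margin + recent_margin


# Entirely hypothetical inputs, just to show the call:
print(hollinger_rating(sos=0.52, sos_l10=0.48, marg=7.5, marg_l10=5.0,
                       home=22, road=19, home10=4, road10=6, games=41))
```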
Let me start by saying that I generally like this idea. First and foremost, it’s objective. But more importantly, Hollinger’s ranking is based primarily on a team’s offensive and defensive ability. Why is this last point important? Let’s turn to the words of Hollinger:
One of my goals was to create a system that told us more about a team’s quality than the standings do.
So instead of winning percentage, the rankings uses points scored and points allowed, which are actually better indicators of a team’s quality than wins and losses.
This might not sound right at first, but studies have shown scoring margin to be a better predictor of future success than a team’s win-loss record. Thus, scoring margin is a more accurate sign of a team’s quality.
That explains why, for instance, Phoenix is No. 1 right now even though Dallas has a better record — the Suns have the best scoring margin in basketball.
Conversely, it explains why Miami is No. 24 even though the Heat are close to .500.
Okay, so I like the ranking. Of course, I still have a small quibble.
A couple of months ago I created a small uproar by noting that Hollinger’s Player Efficiency Rating (PER) has a few problems. One issue I raised was that it was not entirely clear what Hollinger’s PER was seeking to measure. A similar quibble could be offered with respect to Hollinger’s power rankings. He offers explanations for his weights, but often his explanations seem to boil down to a “this makes sense to me” defense, as opposed to a “these weights allow us to explain or predict something (like final standings or playoff outcomes)” defense. I would emphasize that my question concerning weights is a very minor quibble. I have no doubt that in evaluating a team it is reasonable to consider the elements Hollinger includes. It’s just not clear to me why each factor is weighted as he suggests. Still, I very much prefer Hollinger’s rankings to a power ranking based on a writer’s subjective impressions of each team.
Ranking Offensive Efficiency, Defensive Efficiency, and Projected Wins
Okay, all that being said, I thought it might be useful to offer an update of a ranking I posted the day after Christmas. At that time I offered a ranking of the best offenses, best defenses, and best teams. I would not argue that these rankings are better or worse than what Hollinger is offering. I am merely trying to show what we see when we only focus on the quality of each team’s offense and defense.
The Best Offenses
Table One: The Offensive Efficiency Ranking
Offensive efficiency is defined as how many points a team scores per possession. For example, the Denver Nuggets score 105.4 points per game, the 4th best mark in the league. Denver, though, plays at the fastest tempo in the league. Hence, when we look at offensive efficiency we find that Denver ranks 13th, or closer to the middle of the pack. In contrast, the Detroit Pistons rank 18th in points scored per game, but are 7th in offensive efficiency. From this we would conclude that the Pistons are actually a better offensive team than the Nuggets.
Although the Pistons are above average, Detroit is not the best. The two best teams, and this is true whether we look at points scored per game or the offensive efficiency, are the Phoenix Suns and Washington Wizards.
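For readers who want to see the pace adjustment spelled out, here is a small Python sketch. The possession estimate (field goal attempts, minus offensive rebounds, plus turnovers, plus roughly 0.44 free throw attempts) is a common approximation from the efficiency literature rather than anything unique to this post, and the team totals below are invented. Defensive efficiency is the same calculation applied to points allowed.

```python
def estimate_possessions(fga, orb, tov, fta):
    # A common approximation: every possession ends with a shot attempt,
    # a turnover, or a trip to the line (free throws weighted at 0.44);
    # an offensive rebound extends a possession rather than starting a new one.
    return fga - orb + tov + 0.44 * fta


def offensive_efficiency(points, possessions):
    # Points scored per 100 possessions.
    return 100.0 * points / possessions


# Hypothetical season totals for a fast-paced and a slow-paced team:
fast = dict(points=4300, fga=3500, orb=500, tov=600, fta=1100)
slow = dict(points=3900, fga=3100, orb=430, tov=520, fta=1000)

for name, t in [("fast-paced team", fast), ("slow-paced team", slow)]:
    poss = estimate_possessions(t["fga"], t["orb"], t["tov"], t["fta"])
    print(name, round(offensive_efficiency(t["points"], poss), 1))
```

With these made-up totals the fast-paced team scores more points per game but fewer points per possession, which is exactly the Denver situation described above.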
The Best Defenses
Table Two: The Defensive Efficiency Ranking
When we look at points surrendered per game we see that the Nuggets are ranked 27th in the league. This ranking, though, is driven by the tempo the team plays. Defensive efficiency ranks the Nuggets 8th. Yes, Denver is relatively better at defense (a ranking driven by the team’s most productive player, Marcus Camby).
The top two defensive teams are San Antonio and Chicago. Phoenix, the top offensive team, is ranked 12th defensively. The Wizards, though, are ranked 29th on defense.
The Best Teams
"Best" is being defined here strictly in terms of offensive and defensive efficiency. That is not to say that strength of schedule is not important. As noted, though, I am only looking at how the teams rank if we only consider offensive and defensive ability. So if a team has played a relatively easy (or hard) schedule so far, then the ranking over-states (or under-states) the team’s prospects.
When we consider both offensive and defensive efficiency, the team at the top of the rankings is San Antonio. The Spurs rank 1st in defense and 5th in offense. The next two teams – the Suns and Dallas Mavericks – are quite close to the Spurs. These three teams are all projected to win sixty or more games, so these franchises are the very best in the NBA.
After these three we see a couple of teams that are very good. Both the Chicago Bulls and Houston Rockets project to win about 54 games. This makes the Bulls the best team in the East, which this year isn’t saying much.
Once we get past the top five, then we see quite a drop-off. The Utah Jazz currently has a record of 27-14, which translates into 54 wins over the course of the season. The team’s Efficiency Differential – or the difference between offensive and defensive efficiency per 100 possessions – is only 2.64. This is about half of the differential we see for Chicago or Houston. Consequently I would conclude that Utah is not quite as good as Chicago or Houston (at least, thus far this season).
Connecting to the Players
The concepts of offensive and defensive efficiency come from the writings of Dean Oliver and John Hollinger. From The Wages of Wins we see how to go from a team’s offensive and defensive efficiency to an evaluation of the individual players on the team.
Basically a regression of wins on the two efficiency metrics allows us to ascertain the relative value – in terms of wins – of points, field goal attempts, free throw attempts, rebounds, steals, and turnovers. A few more regressions allow us to determine a value for personal fouls, blocked shots, and assists. Once we have these values, with a bit of work we can determine the Wins Produced for each player. And the calculation of Wins Produced allows us to identify which players are responsible (or not) for the projected wins we see for each team.
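The first link in that chain, a regression of team wins on the two efficiency measures, can be sketched in a few lines of Python. The team data below is invented and the coefficients are purely illustrative; the actual Wages of Wins estimation, and the additional steps that turn those coefficients into values for individual statistics and players, are not reproduced here.

```python
import numpy as np

# Hypothetical team-season data: wins, offensive efficiency, defensive efficiency
# (points scored and allowed per 100 possessions).
wins    = np.array([52, 55, 36, 42, 45, 26])
off_eff = np.array([108.0, 106.0, 104.0, 103.0, 101.0, 99.0])
def_eff = np.array([104.0, 100.0, 106.0, 103.0, 99.0, 105.0])

# Ordinary least squares: wins = b0 + b1*off_eff + b2*def_eff
X = np.column_stack([np.ones_like(off_eff), off_eff, def_eff])
b0, b1, b2 = np.linalg.lstsq(X, wins, rcond=None)[0]

print(f"one extra point scored per 100 possessions is worth about {b1:.2f} wins")
print(f"one extra point allowed per 100 possessions costs about {abs(b2):.2f} wins")
```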
The Seattle SuperSonics in 2006-07
To illustrate, let’s consider the Seattle SuperSonics this season.
Seattle has played exactly half of its season and currently has a record of 16-25. When we look at offensive and defensive efficiency, we see this record is well deserved. Although the Sonics rank 9th in offensive efficiency, they rank only 25th on defense. Seattle’s efficiency differential stands at -1.94, which translates into a projected winning percentage of 0.439.
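As a rough check, the two Seattle numbers just quoted imply an approximately linear conversion of about 0.03 of winning percentage per point of differential, centered on .500. The sketch below simply back-solves that slope from those two figures; it is a back-of-the-envelope calibration, not the actual Wages of Wins model.

```python
# Back-solve the implied slope from the two Seattle figures quoted above:
# a -1.94 efficiency differential maps to a .439 projected winning percentage.
differential = -1.94
projected_pct = 0.439

slope = (projected_pct - 0.500) / differential   # roughly 0.031 per point of differential
print(f"implied slope: {slope:.4f}")

# Reuse that slope to translate any differential into projected wins over 82 games.
def projected_wins(diff, slope=slope, games=82):
    return games * (0.500 + slope * diff)

print(round(projected_wins(-1.94), 1))   # Seattle: about 36 wins
print(round(projected_wins(2.64), 1))    # Utah's 2.64 differential from above
```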
Which players are responsible for this outcome? The following table reports the Wins Produced for Seattle’s players after 41 games.
Table Four: The Seattle SuperSonics in 2006-07
Last summer the Sonics signed Chris Wilcox to a three-year, $24 million contract. This was similar to the contract Cleveland offered Drew Gooden. As I noted last summer, though, Gooden has historically been a much more productive player than Wilcox.
This season Wilcox is posting a Wins Produced per 48 minutes of 0.111 (average WP48 is 0.100). This is an improvement over his career WP48 entering the season (0.068), but not quite what Seattle saw last season in 29 games (WP48 of 0.229). Given that Gooden is giving the Cavaliers a WP48 of 0.222 this season, we can conclude that so far either Seattle is paying Wilcox too much or Gooden is getting too little.
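For anyone new to the metric, WP48 is simply a player’s Wins Produced prorated to 48 minutes of playing time. A quick sketch, with made-up totals:

```python
def wp48(wins_produced, minutes):
    # Wins Produced per 48 minutes of playing time.
    return 48.0 * wins_produced / minutes

# Hypothetical: a player with 4.0 Wins Produced in 1,700 minutes.
print(round(wp48(4.0, 1700), 3))   # about 0.113, a bit above the 0.100 average
```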
Although Wilcox might be under-performing his contract, he is thus far the only above average performer Seattle employs at power forward and center. In other words, as has been the case since the 1992-93 season, this team still has a problem finding consistent production in the middle.
If you are looking for production on this team you have to look at Rashard Lewis and Ray Allen. Fifty percent of the team’s wins come from these two players. Unfortunately, once you get past Allen, Lewis, and Wilcox, no other player who appears on a regular basis is above average.
Given the depth in the Western Conference it seems likely that the Sonics will be back in the lottery in 2007. And given the lack of productivity in the front court, it seems likely that this team will once again target a big man in the 2007 draft.
Teams to Analyze
There are now only seven teams I have yet to analyze: Atlanta, Charlotte, Denver, Miami, Milwaukee, New Orleans–Oklahoma City, and Philadelphia. At the time of the Iverson trade I offered a comment on both Philadelphia and Denver, so I think we should wait a few more weeks to check back with these teams. I am open to posting on any of the other five teams. So if anyone has a preference, let me know.
Also, like the Sonics, other teams are approaching the midpoint of the season. So far 13 teams have played 41 games. As each team hits this mark I am downloading their data from NBA.com. When all teams have reached 41 games – and I find time to do the analysis – I will start posting on who the best (and worst) players, rookies, teams, etc. are at the midpoint. Given my schedule, this analysis should be posted by November (of 2009).
– DJ
Evan
January 22, 2007
“Still, I very much prefer Hollinger’s rankings to a power ranking based on a writer’s subjective impressions of each team.”
It seems like you should re-write this sentence to read:
Still, I very much prefer Hollinger’s “subjective formula” to a power ranking based on a writer’s subjective impressions of each team.
I like Hollinger’s formula better than the others, but he’s just replaced capricious intuition with his own intuitive formula.
dberri
January 22, 2007
Evan,
Yes, Hollinger’s formula is subjective. Which is the point I was trying to gently make.
I wonder what one would find if you tried to predict playoff performance as a function of offensive efficiency, defensive efficiency, and schedule strength. This might be a way to get at the weights Hollinger uses objectively.
Evan
January 22, 2007
I know, I was just laughing at how gently you were making your point.
thwilson
January 22, 2007
Hi Dave,
I think you are making an error in the way that you calculate projected wins. You should only be projecting for the half of the season remaining, not for an entire 82 games. This is most glaring in the cases of the Bulls and Lakers.
To achieve your projection the Bulls would have to go 31-10 for the remainder of the season, a winning percentage over .750, which I don’t think you mean to predict.
Similarly, to reach your prediction the Lakers would go 19-22 from here on out. Even those most down on their prospects would likely find this projection unduly gloomy.
I think you would get more accurate projections, and probably the projections you are looking to get now, by applying the Pythagorean projection (assuming this is your method) to just the remaining games and adding that result to the current number of wins.
Best Wishes,
T. H. Wilson
dberri
January 22, 2007
T.H.
Not sure I am really trying to forecast the rest of the season. The analysis is based on who has played so far, and due to injuries, trades, and the whims of coaches, that is all going to change.
What I am saying is given how this team has performed, if this were true over the course of an 82 game season you would see X number of wins. And therefore, we can see that the Lakers are doing a bit better than their efficiency measures suggest. And the Bulls are doing a bit worse.
JB
January 22, 2007
I am with TH Wilson (and Okapi in a previous thread) on this issue that ending with a column of “projected wins” gives the impression to many that you are projecting teams will end with that number of wins. If you want to just discuss underlying team strength it might be better to stop short of “projected wins” or call it something else.
Applying it just to remaining games might get closer to actual, but it seems likely to overstate still.
JB
January 22, 2007
I think there have been studies of the average point differential for different W-L levels. It might be possible to say that an in-season point differential or expected win % will tend toward that average W-L, but teams vary and so there will be a distribution around that level. And of course team strength (producing the strength measures) will vary over the course of a season.
JB
January 22, 2007
I should have worked on that last post longer. Let me rework it a little for a bit more clarity (though this topic deserves more words and more time to handle fully).
I think there are studies of how well various versions of converting point differential into expected win% fit actual results for recent NBA seasons. And probably more ought to be done comparing the formulas in public: Sagarin’s, Hollinger’s, knickblogger’s Otter (based on the Mod Col method), and any other worthy contenders.
It might be possible to say that an in-season point differential will tend toward an average W-L, but teams vary and so there will be a distribution around that level.
The longer the amount of season in the bag, the more likely, it would seem, that its assessment would be significant and predictive. But of course team strength delivered on the court (producing the strength measures) will vary over the course of a season, and so changes of trajectory are to be expected.
JB
January 22, 2007
“So instead of winning percentage, the rankings uses points scored and points allowed, which are actually better indicators of a team’s quality than wins and losses.”
Would an even more complicated strength formula that used both actual win % to date and scoring differential (and whatever subparts of each) do better than one based on just the scoring differential? Seems like it would.
Okapi
January 22, 2007
JB,
Bill James of Sabermetrics fame created an empirically based formula to translate run differentials into winning %. The formula was:
winning % = [runs scored]^2 / ( [runs scored]^2 + [runs against]^2 )
Apparently others came along and replicated this analysis for basketball, coming up with this:
winning % = [points scored]^14 / ( [points scored]^14+ [points against]^14 )
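Here is a quick sketch of both versions of that formula (exponent 2 for runs, 14 for points); the season totals plugged in are made up.

```python
def pythagorean_pct(scored, allowed, exponent):
    # Expected winning percentage from points (or runs) scored and allowed.
    return scored ** exponent / (scored ** exponent + allowed ** exponent)

# Hypothetical full-season totals:
print(round(pythagorean_pct(800, 700, 2), 3))     # baseball: runs, exponent 2
print(round(pythagorean_pct(8200, 8000, 14), 3))  # basketball: points, exponent 14
```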
If halfway through the baseball season you could predict a team’s 2nd half record using either (1) year-to-date win-loss % or (2) projected win-loss % from run differentials, you would generally be better off using the projected win-loss % from run differentials. In other words, a team’s ability to win close games in the first half of the season doesn’t persist into the second half of the season.
You suggest using actual win % to date as well as scoring differential, but I’d guess that in a model comprised of both, win % to date would drop out.
Okapi
January 22, 2007
Just to elaborate a bit, I think JB brings up a good point when he suggests still looking at actual win %, even if I think it would drop out of a model when you look at historical results.
I’m reminded of a Wall Street Journal story on a system for projecting baseball batting average. IIRC, ProTrade was producing it. The system looked at where balls were hit and generated a pro forma batting average. It was an interesting approach because a batter could go through a period of bloop singles that propped up his average. But if you had to project his average in the next period you would be more accurate by stripping out this noise. Hitters that underperformed their projected batting average were called “unlucky.”
However, the problem was that the system kept saying Barry Bonds was “unlucky” and Ichiro Suzuki was “lucky.” Barry Bonds was consistently “unlucky” because defenses shifted when he was at bat. Suzuki was consistently “lucky” because his speed enabled him to beat out throws. The model should have been designed to control for persistence in “luck.” In statistical terms, the errors of the model’s predictions might have been serially correlated. A term built into the model could have controlled for this.
JB
January 22, 2007
There was some relevant discussion on moving from in-season win% to final-season win% on the two pages of this thread. Regression to the mean is the stronger statement of the reason (the feeling or half-memory) why I said I thought applying a power ranking to the remaining games might overstate the actual result. Skepticism about the greater predictive power of recent games is another point made there that could be discussed further in the review of Hollinger’s power ranking.
tinyurl.com/2qmbsn
thwilson
January 23, 2007
Dave,
Maybe the problem is just with the label. Projected sure sounds like a forecast for this season. Perhaps the word “expected” would be more appropriate…that’s the term that knickerblogger and basketball-reference both use.
Regards,
T. H. Wilson
kk
February 4, 2007
It’s garbage. The Pacers dropped from 10 to 15 over a span in which they won 3 in a row. What?! The system needs tweaking. Wins have to count a lot more. If you replaced the BCS with a similar system, I guarantee you’ll have quite a lot of irate people.
Jake
February 13, 2007
Maybe these rankings provide a more objective view of the league, but there is no way you can claim the Spurs are better than both the Mavericks and the Suns. Not only are they NINE games behind the Mavs (you can’t tell me that doesn’t count for anything), but they are also 1-2 against them with that lone win coming in the midst of the Mavs’ 0-4 start (they are 43-5 since). If you watch the Spurs you can see that Duncan is not nearly as efficient as he used to be, and in the fourth quarter the best part of their offense (Duncan) is virtually useless (slow in the post, terrible from the line). These Spurs aren’t nearly as good as the teams of recent years.
Also, winning games has to count for something. Think of it this way: if a team were to lose five games in a row by an average of three points, and then win their next game by 20, they have a positive point differential for those games. However, it is absurd to claim that they played above average in those six games since they went 1-5. This formula would be very good if it was adjusted to make winning games count for something (and a substantial amount).