Understanding Jazz

Posted on January 28, 2011

The Utah Jazz haven’t won a game since January 14th, a streak of six consecutive losses. Prior to this streak, the Jazz were 27-13 and appeared to rank among the top teams in the West. But after losses to Washington, New Jersey, and Philadelphia – below-average teams in the East – fans of the Jazz wonder what happened to this team (and I know this, because I read about the Jazz each morning at breakfast).

The subject of what has happened recently certainly is interesting. But before we try to understand the past few weeks, it is a good idea to gain some perspective by discussing the entire season to date. After 46 games, the Jazz have a record of 27-19. When we turn to efficiency differential (offensive efficiency minus defensive efficiency), we see a team with a 0.4 mark. This is consistent with a team that would win between 23 and 24 games across a 46-game schedule, or about 42 games across an entire season. In other words, the Jazz have won a few more games thus far than their differential suggests.
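
For those who want to see the arithmetic, here is a minimal sketch of the conversion from efficiency differential to expected wins. The 0.0334 slope (added win percentage per point of differential) is the approximate figure used in the Wages of Wins models and is consistent with the numbers quoted above; treat the exact coefficient as an assumption for this sketch.

```python
# Sketch of the efficiency-differential-to-wins arithmetic described above.
# The 0.0334 slope is an approximation, not an exact published constant.
def expected_wins(differential, games, slope=0.0334):
    """Expected wins for a team with the given efficiency differential."""
    win_pct = 0.50 + slope * differential
    return win_pct * games

jazz_differential = 0.4  # offensive efficiency minus defensive efficiency, per the post

print(round(expected_wins(jazz_differential, 46), 1))  # about 23.6 wins over 46 games
print(round(expected_wins(jazz_differential, 82), 1))  # about 42.1 wins over a full season
```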

When we move from efficiency differential to Wins Produced – detailed in the following table – we can see who is responsible for these wins.

Deron Williams, Paul Millsap, Andrei Kirilenko, and Al Jefferson have combined to produce 22.3 wins thus far this season. Of the remaining nine players on the roster, no one has produced more than 1.3 wins. So this team is the quartet of Williams-Millsap-Kirilenko-Jefferson, and then not much else.

The above table doesn’t just report what has happened in 2010-11.  It also reports what we should have expected had the veterans on this team maintained the per-minute performance we saw last year (or in the case of Raja Bell and Francisco Elson, what we saw in 2008-09).  As reported, the Jazz – given what we observed in the recent past – should have expected this collection of players to win 26 of their first 46 contests. Of these 26 expected wins, 23.0 can be tied to the play of Williams, Millsap, Kirilenko, and Jefferson.

Yes, the same quartet that is leading the Jazz this year was expected to lead the Jazz before the season started.  And the remaining players… well, the Jazz shouldn’t have expected much from them at all (which is what the Jazz have gotten).  All of this means the Jazz in 2010-11 are not really a puzzle.  This is a team that we should have expected to be a bit above average.  And this is what they have done.

Of course, the Jazz have not done well recently.  Surely this means something.

Well, maybe not. Given what the Jazz players did last year, we could have expected these players – if performance didn’t change – to win between 40 and 50 games this season. And that means we should have expected the Jazz to lose between 32 and 42 games. These losses, though, were not going to appear in a consistent fashion. Sometimes average teams – or slightly above average teams – win a game, lose a game, and win a game. But it is also possible for wins and losses to occur in streaks. When these streaks happen, we should be careful before we jump to conclusions. Yes, the streak could mean something. More often than not, though, one suspects these streaks are just part of how a team performs across an entire season.

Again, let me note the argument Dean Oliver offers in Basketball on Paper: a team that wins only 30% of its games has a 90% chance of winning three in a row at some point in an NBA season. So streaks happen. And when they happen, it doesn’t mean the team that is streaking has necessarily changed.
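
Oliver’s own calculation is not reproduced here, but a quick Monte Carlo sketch shows how a claim like this can be checked. It assumes every game is an independent coin flip at a fixed win probability, which is a simplification of a real schedule; the same function can also be pointed at a roughly .500 team and a six-game losing streak, which is the Jazz’s current situation.

```python
import random

def streak_probability(win_pct, streak_len, games=82, trials=20_000, losing=False):
    """Estimate the chance of at least one win (or loss) streak of the given
    length, assuming each game is an independent coin flip at win_pct."""
    hits = 0
    for _ in range(trials):
        run = 0
        for _ in range(games):
            game_won = random.random() < win_pct
            if game_won != losing:  # counts wins normally, losses when losing=True
                run += 1
                if run >= streak_len:
                    hits += 1
                    break
            else:
                run = 0
    return hits / trials

# Chance a 30% team wins three in a row at some point in an 82-game season
print(streak_probability(0.30, 3))

# Chance a roughly .500 team (the Jazz's projected range) loses six straight
print(streak_probability(0.50, 6, losing=True))
```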

Despite this argument, people might want to look at player performance across the past few weeks and see if there is someone we can blame for this recent decline. About three weeks ago I offered a post on the Jazz. At that time, the Jazz were 23-11. Since then, the Jazz have gone 4-8. When we look at the individual players, we can identify the players responsible for this outcome.

The following table reports the Wins Produced for these players across the past 12 games.

Given the minutes allocated and the performance across the first 34 games, the Jazz should have expected to win about six games. But the team has only produced about three wins, and one can easily see that the player who has declined the most is Paul Millsap. Yes, Millsap is primarily to ‘blame’ for this recent decline.

However, Millsap is not the only problem.  Deron Williams should have produced about 0.8 additional wins across the past 12 games.  So we can ‘blame’ Williams as well.
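
The ‘blame’ arithmetic behind these figures is simple: a player’s expected wins over a stretch are his prior Wins Produced per 48 minutes (WP48) multiplied by his minutes and divided by 48, and the decline is the gap between that expectation and his actual output. The numbers below are hypothetical placeholders, not the figures from the tables above.

```python
def wins_over_span(wp48, minutes):
    """Wins Produced over a span of minutes, given a WP48 rate."""
    return wp48 * minutes / 48

def decline(prior_wp48, recent_wp48, minutes):
    """Wins 'lost' when a player's recent WP48 falls short of his prior rate."""
    return wins_over_span(prior_wp48, minutes) - wins_over_span(recent_wp48, minutes)

# Hypothetical illustration only -- these are NOT the actual numbers from the post.
prior_wp48, recent_wp48, minutes = 0.250, 0.150, 450
print(round(decline(prior_wp48, recent_wp48, minutes), 2))  # wins below expectation
```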

Or can we?  Although we have numbers that say these players have declined, these numbers may not mean as much as we might think.

Consider the following about performance in the NBA:

  • From season to season, a player’s ADJ P48 has a 0.83 correlation, so players are quite consistent on this measure (per-minute Win Shares has only a 0.67 season-to-season correlation, and adjusted plus-minus only a 0.26 correlation). A sketch of this kind of correlation calculation appears after this list.
  • Performance across a season, though, is not constant.  Players tend to have good games and bad games.
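
Here is a sketch of the correlation calculation behind the first bullet. The two arrays of hypothetical per-player values stand in for a real dataset that pairs each player’s metric in consecutive seasons.

```python
import numpy as np

# Hypothetical per-player ADJ P48 values in consecutive seasons; in practice these
# would come from a dataset pairing each player's year-t and year-t+1 numbers.
adjp48_year1 = np.array([0.21, 0.15, 0.08, 0.30, 0.12, 0.19, 0.05, 0.25])
adjp48_year2 = np.array([0.19, 0.17, 0.10, 0.27, 0.11, 0.21, 0.07, 0.23])

# Pearson correlation between the two seasons; the post reports roughly 0.83
# for ADJ P48, 0.67 for per-minute Win Shares, and 0.26 for adjusted plus-minus.
r = np.corrcoef(adjp48_year1, adjp48_year2)[0, 1]
print(round(r, 2))
```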

For example, last night LeBron James had an effective field goal percentage of 29% against the Knicks. And this is because the Knicks know how to play defense against King James. And back on December 17 – when LeBron had an effective field goal percentage of 67% against the same Knicks – that was because the Knicks forgot how to play defense against LeBron.
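
For reference, effective field goal percentage is the standard shooting measure that gives made three-pointers half a field goal of extra credit; a one-line version:

```python
def effective_fg_pct(fgm, threes_made, fga):
    """Effective FG%: made field goals plus half-credit for made threes, per attempt."""
    return (fgm + 0.5 * threes_made) / fga

# Illustrative only: 8 makes, 3 of them threes, on 20 attempts
print(effective_fg_pct(8, 3, 20))  # 0.475
```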

Okay, I made up those explanations.  Again, performance from game-to-game, or even week-to-week, is not a constant.  Over time – as the sample size gets bigger and bigger – a better picture of a player’s productivity appears.  But over small samples, this picture is quite difficult to see.
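
A small simulation makes the sample-size point concrete: a shooter with a fixed ‘true’ ability still posts wildly different single-game percentages, while the running average settles down as attempts accumulate. The ability and shot counts below are made up for illustration.

```python
import random

random.seed(1)

TRUE_PCT = 0.50      # assumed 'true' shooting ability (illustrative)
SHOTS_PER_GAME = 20  # assumed attempts per game (illustrative)

makes_so_far, shots_so_far = 0, 0
for game in range(1, 11):
    makes = sum(random.random() < TRUE_PCT for _ in range(SHOTS_PER_GAME))
    makes_so_far += makes
    shots_so_far += SHOTS_PER_GAME
    game_pct = makes / SHOTS_PER_GAME          # noisy single-game mark
    running_pct = makes_so_far / shots_so_far  # converges toward the true ability
    print(f"game {game:2d}: single-game {game_pct:.0%}, running {running_pct:.0%}")
```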

Given the nature of small samples, we should be very hesitant to read anything into what we see across a few games (or one game, or a few minutes within a game).

So the last few weeks of Jazz basketball might mean that Millsap has forgotten how to be a productive NBA player and Williams has permanently declined from what we have seen in the past. Or it might just be part of the normal fluctuations in player performance across a long season. The latter story seems more plausible, although probably not satisfying to those who want an explanation for every fluctuation we see in the data.

– DJ
