
Understanding PDO: All Teams are NOT Created Equal

[Image: PDO-CHART.0.png]

Wait...that's not the right PDO. Guess I got my projects mixed up again. Oh well, might as well take this chance to warn you about the math coming your way. If you don't want to deal with it, skip to the "So what does it mean?" section.

In a shortened 2013 season, the Chicago Blackhawks started the season at a blistering and record-setting pace. By riding both high possession numbers and an insanely high PDO, they were able to go the entire first half of the season without ever losing in regulation. Luckily, that gave them a huge cushion for making the playoffs, because after their first regulation loss, their PDO plummeted back towards the mean...

...and they continued to win games convincingly, cruising through the regular season before winning their second Stanley Cup in four years.

That's odd. Isn't regression to the mean supposed to result in teams freefalling through the standings as stats nerds laugh and say "I told you so" with schadenfreude? That's not entirely inaccurate - just ask our friends over at Pension Plan Puppets - but for some reason that didn't hold true for the Chicago Blackhawks. The reason can be summed up in this graph:

[Image: ej6ecx.0.jpg (slope and correlation of the PDO/point percentage relationship, by possession rank)]

This can be a very intimidating and difficult graph to interpret, but I would argue it's the one graph that's missing the most when it comes to understanding how PDO works: PDO does not treat all teams equally. To start, let's walk through the basics of why we care about PDO, doing some quick analysis along the way.

How do we analyze PDO?

Two values were needed for each team to study PDO. The first was their 5-on-5 PDO. The second was the percentage of possible standings points that team earned. For example, the 2007-2008 Detroit Red Wings were the league's top possession team that year, with a PDO of 99.8 and a point percentage of .701, meaning they earned 70.1% of the 164 points available in an 82-game season, or 115 points.
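
If it helps to see those definitions spelled out, here's a minimal Python sketch. The 8.2/91.6 shooting and save split is a made-up placeholder; only the 115 points and the .701 point percentage come from the Red Wings example above.

```python
# Minimal sketch of the two inputs used for each team. The 8.2/91.6 split is a
# made-up placeholder; 115 points is the Red Wings figure quoted above.

def pdo(on_ice_shooting_pct, on_ice_save_pct):
    """5-on-5 PDO: on-ice shooting % plus on-ice save %, both expressed out of 100."""
    return on_ice_shooting_pct + on_ice_save_pct

def point_percentage(points, games=82):
    """Share of the available standings points (2 per game) that a team earned."""
    return points / (2 * games)

print(pdo(8.2, 91.6))          # 99.8
print(point_percentage(115))   # ~0.701, i.e. 70.1% of the 164 possible points
```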

Then, I created multiple categories of data. Each category consisted of 5 adjacent ranks, so the first one was ranks 1 through 5, the second 2 through 6, and so on until I got all the way to the lowest category of 26 through 30. That's a total of 26 categories. For each category, I made a plot that basically looks like this:

[Image: 2cwlyy1.0.jpg (scatterplot of point percentage against PDO, with a best-fit line)]

This is a scatterplot relating point percentage to PDO. The two are related with a best-fit linear regression, the line through the data points that describes them best. This line has two important values. The first is its slope; for this example, it is 0.0436. The slope tells you how much point percentage changes for a given change in PDO: the higher it is, the greater the change. The second value is the correlation coefficient, or r^2 value; for this example, it is .3429. The r^2 value always falls between 0 and 1, and the higher it is, the better the best-fit line matches the data. A value of .3429 is high enough to be significant, which is why we accept that PDO correlates well with point percentage. In other words, this chart confirms that PDO matters.
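
For anyone curious where those two numbers come from, a fit like this takes only a few lines of Python. The arrays below are invented placeholder values used purely to illustrate the calculation; they are not real league data.

```python
import numpy as np
from scipy.stats import linregress

# Placeholder values only; not the real league data.
pdo_values = np.array([98.4, 99.1, 99.8, 100.3, 100.9, 101.6])
point_pcts = np.array([0.510, 0.555, 0.585, 0.620, 0.640, 0.680])

fit = linregress(pdo_values, point_pcts)
print(f"slope = {fit.slope:.4f}")        # change in point percentage per point of PDO
print(f"r^2   = {fit.rvalue ** 2:.4f}")  # how well the best-fit line matches the data
```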

So why do these two values matter? Each one, in essence, tells us how much we should care about PDO as it relates to point percentage. If the best-fit line has a low slope, we don't care, because even a large change in PDO doesn't lead to much change in point percentage. We also don't care about low correlation coefficients. If the r^2 value is low, the line doesn't fit our data well, so there isn't a meaningful correlation between PDO and point percentage, and we don't care how a team's PDO changes, because it won't impact point percentage anyway. The important thing to remember is that these are relative values, so what counts as too high or too low depends on context.

So, if we see a plot with a low slope and a low correlation coefficient, PDO and point percentage aren't related, and even if they were it wouldn't be a meaningful relationship. If that's the case, teams shouldn't bother worrying about their PDO. If both the slope and the correlation coefficient are high, that means that PDO and point percentage are very closely correlated, and that a small change in PDO can lead to a large change in point percentage. That would mean teams should care about PDO a lot. When we plot point percentage and PDO, we get a moderate slope and a high correlation, which means we care about PDO. So far, nothing is necessarily counterintuitive or hard to understand. Now we can get on to the plot I showed you earlier.

Time for some fun!

The first thing was the gathering of data. Lots of it. Teams were ranked in terms of possession for every full season since 2007, using 5-on-5 Fenwick For % as the proxy. The top-ranked team in each season got a 1, the worst a 30. Every team was then compiled into a single 180-team list, with the six #1-ranked teams at the top and the six #30-ranked teams at the bottom. From there I made 26 categories of 5 ranks each, with the first category being ranks 1 through 5, the second one 2 through 6, and so on until the final category of ranks 26 through 30.
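
The sliding categories themselves are easy to reproduce. Here's a rough sketch, assuming each of the 180 teams is stored as a small record with its within-season possession rank, PDO, and point percentage (the field names are illustrative, not taken from the actual spreadsheet):

```python
# Sketch of the category construction. Each of the 180 teams is assumed to be a
# dict holding its within-season possession rank, PDO, and point percentage;
# the field names are illustrative, not copied from the original spreadsheet.

def build_categories(teams, window=5, worst_rank=30):
    """Group teams into overlapping rank windows: 1-5, 2-6, ..., 26-30."""
    categories = []
    for start in range(1, worst_rank - window + 2):  # start = 1 through 26
        ranks = set(range(start, start + window))
        categories.append([t for t in teams if t["season_rank"] in ranks])
    return categories

# e.g. categories = build_categories(all_180_teams)
# len(categories) == 26, and each category holds 30 teams (5 ranks x 6 seasons)
```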

So, for each category of 5 ranks I made earlier, I made a plot like the one above, correlating point percentage and PDO. That means each plot had a sample size of 30 teams (five ranks with six teams each), which is basically a full season's worth of data. This is important, because dealing with a small sample size is dangerous. Luckily, 30 teams is enough to draw basic conclusions, even though for specifics the ideal number is 100 teams or so. Don't worry, every conclusion I make will be about basics, not specifics.

For each plot, I recorded the slope and correlation coefficient of the best-fit line. In total, I had 26 plots of point percentage and PDO, and with each plot a slope and correlation strength. I had data for good possession teams only (like the category for ranks 1 through 5), bad possession teams only (like for ranks 26 through 30), and everywhere in between. Using these plots, we can now see how influential PDO is based on possession. That's how we get this plot:
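
Each category's slope and correlation strength come from running the same regression inside every window. Here's a continuation of the same sketch, with the same assumed record layout:

```python
import numpy as np
from scipy.stats import linregress

def category_fit(category):
    """Fit point percentage against PDO for one category of teams.

    Returns (average possession rank, slope, r^2) for that category.
    """
    pdo_vals = [t["pdo"] for t in category]
    pct_vals = [t["point_pct"] for t in category]
    fit = linregress(pdo_vals, pct_vals)
    avg_rank = np.mean([t["season_rank"] for t in category])
    return avg_rank, fit.slope, fit.rvalue ** 2

# e.g. results = [category_fit(c) for c in build_categories(all_180_teams)]
# gives the 26 (average rank, slope, r^2) triples behind the chart above
```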

[Image: ej6ecx.0.jpg (slope and correlation of the PDO/point percentage relationship, by possession rank)]

Each of the 26 categories has a spot on the horizontal axis, according to its average rank. The top category (ranks 1 through 5) appears farthest left, at a value of 3. From there, each category's average rank increases by one until you reach 28, the average rank of the lowest category. The vertical axis shows both the correlation (in red) and the slope (in blue) of the best-fit linear regression. You may have noticed that the slope is multiplied by 10. That is only so it shows up on the same scale as the correlation and you can see trends in both simultaneously. There is no trickery or data manipulation here.
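
Recreating the chart itself is just a matter of plotting those 26 triples, with the slope scaled by 10 purely for display. Here's a sketch that takes the triples produced by the previous snippet:

```python
import matplotlib.pyplot as plt

def plot_summary(results):
    """Plot r^2 (red) and slope x 10 (blue) against each category's average rank.

    `results` is the list of (average rank, slope, r^2) triples from the
    previous sketch; the x10 scaling is purely so both series fit on one axis.
    """
    avg_ranks = [r[0] for r in results]
    plt.plot(avg_ranks, [r[2] for r in results], color="red", label="correlation (r^2)")
    plt.plot(avg_ranks, [r[1] * 10 for r in results], color="blue", label="slope x 10")
    plt.xlabel("Average possession rank of category")
    plt.legend()
    plt.show()
```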

So what does it mean?

Remember how we don't care about low correlations and low slopes, but care a lot about high slopes and high correlations? Good. Keep that in mind.

So what are the important features of this graph? Well, it turns out that the plot of all teams' point percentage and PDO is actually misleading; it manages to make PDO look both more and less important than it really is. The plot of all teams has a slope of 0.0436 and a correlation coefficient of .3429. Compared to what most categories of teams show, both of those numbers are low, so most teams should care about PDO even more than the overall plot suggests. But compared to what the elite possession teams show, those numbers are actually high.

For most teams, the correlation is about .7 and the slope hovers around 0.05. To put that in perspective, the difference between a PDO of 100 and a PDO of 101 for an 18th-place possession team could be about 10 points in the standings. That's likely the difference between golfing and practicing in late April.

What's potentially even more significant is that teams that are terrible at possession, like the 2013 Toronto Maple Leafs or the 2013-2014 Colorado Avalanche, have a slope as high as .071. Once again for perspective, the 13-14 Avalanche won the Central Division with a PDO of 101.8 and 112 points. If they had had a PDO of 100 instead, they probably would have been a bubble playoff team with between 90 and 95 points. Now we can see why almost every historically fraudulent PDO-riding team has been in the basement in terms of possession: the better a team is at possession, the less its good luck helps it.
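
The arithmetic behind those two examples is simple enough to spell out. The slopes and PDO gaps are the ones quoted above (I'm using 0.06 for the mid-pack case, to match the roughly 10-point swing mentioned earlier), and 164 is the number of points available in a full season:

```python
# Back-of-the-envelope swing in standings points for a given change in PDO.
# 164 is the number of points available over a full 82-game season.

def standings_swing(slope, pdo_change, available_points=164):
    """Approximate change in standings points for a given change in PDO."""
    return slope * pdo_change * available_points

print(standings_swing(0.06, 1.0))    # ~10 points for a mid-pack possession team
print(standings_swing(0.071, 1.8))   # ~21 points: 112 - 21 puts the Avalanche near 91
```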

What about elite possession teams? The correlation coefficient plummets after ranks 5 through 7, as does the slope. That means that, relative to most teams, PDO doesn't matter for the elite possession teams. This is huge. The important thing to remember is that this uses categories of rank size 5, so the only way to be sure PDO won't matter as much is to be a top-5 possession team, even though we see the drop around rank 7. Based on historical data, the Canadiens would only need to raise their 5-on-5 FF% by about 3.5% to get into that group, which would make them much less vulnerable to bad luck. Even though that gap is larger than it sounds, it's something we would all like to see.

I think this is important to note because it's something we intuitively understand, but only when it's convenient. I think a big reason Habs fans want Therrien gone is the understanding that we wouldn't need to bank on Price or luck nearly as much if a better possession coach took over, even if the difference were marginal. There is a lot of truth to that, and possibly more than we realize. This can also explain why almost nobody has ever cared about the Chicago Blackhawks' PDO. The interesting thing is that a lot was made of the Los Angeles Kings' traditionally low PDO, even though their failure to reach the level of success their possession implies is more likely due to other factors.

Are there any problems with my method for making these determinations? Well, I don't think so. You could argue that I should have ranked teams by raw FF% across all seasons instead of using season-based rankings (for example: the San Jose Sharks had an FF% of 55.1 in 2007-2008, good enough for fourth place that season; in 2011-2012, that would have gotten them first). However, possession, just like PDO, is zero-sum. That means one team can't have the puck a lot without its opponent having it much less. Therefore, what really matters is how you do relative to your peers and competition. It doesn't matter that the 2007-2008 Sharks had the puck more than the 2011-2012 Red Wings, because those two teams never played each other, and their possession numbers are independent. The 2007-2008 Red Wings, on the other hand, did play the 2007-2008 Sharks, so their puck possession is dependent, or influenced by one another. A scale that mixes independent and dependent teams' stats together would potentially skew the data. By sorting teams based on season rank, only the dependent aspect matters, which prevents skewing.

Another potential knock on the chart is that each data point has a sample size of 30, which can be underwhelming. Once again, I don't think that is a strong argument. Using a much bigger sample size actually dilutes the effect that good possession has, just like the overall plot of point percentage and PDO did. A trend that applies only to top-5 teams wouldn't have shown up as strongly if I had used categories of size 10.

Okay, but couldn't that imply that I was just using smaller category sizes to cherry-pick my data in order to prove my point? Once again, the answer is no, because I did my due diligence. Even if you increase the category size, the trend in the data sticks. For the effect of elite possession teams to be diluted out compared to the rest of the league, the category size has to be a dozen or higher, which indicates, in my opinion, that the difference in the importance of PDO for elite possession teams is significant. As one more piece of due diligence, I did the data analysis and drew my original conclusions in the middle of October, and have since gone through plenty of checks for error or misinterpretation.

Could there be more to it?

The interaction of PDO and possession doesn't necessarily stop there. For example, if we make a plot with the horizontal axis being puck possession and the vertical being PDO, what do we see?

[Image: alndl0.0.jpg (scatterplot of PDO against puck possession)]

The first thing we see is that the two aren't related. A line wouldn't fit that data very well, meaning the correlation coefficient would be low, and even if it were significant, the slope would be basically zero, so we wouldn't care anyway. One could try to make the argument that good possession teams see stronger PDO regression, given that the spread of PDO values narrows above about 53% possession. That's worth investigating (in fact, I know someone who is), but convincing anybody that this is the case right now would be a tougher sell than the Phoenix Coyotes.

So what's the big takeaway?

It is that great possession teams shouldn't (and perhaps don't) care about PDO, because it doesn't really influence their final position in the standings. Teams below the top five or so in possession, however, are much more at the whim of PDO swings than they may realize. Meanwhile, it seems almost certain that a basement possession team will be able to ride a high PDO streak and ultimately use that high-sloped dependence on PDO to lose out on the Connor McSweepstakes.

If nothing else, remember this: In my (very strong) opinion, to draw conclusions about a team's PDO without also looking at their possession to see how that may play a role is the equivalent of using only plus/minus to determine a player's defensive ability.
