Empirical Investigations into Special Teams

Part I: Does puck possession correlate to penalties called for or against?

The Montreal Canadiens had a lot of powerplays in the shortened season of 2012-2013. They led the league, with 203 opportunities, and were the only team to get over 200, as the next closest was the Detroit Red Wings with 185 powerplay opportunities. So did these teams get lucky? Or do they do something that would cause them to have more penalties called for them than against them?

The best argument for teams being able to affect the number of calls they get comes down to the fact that, generally speaking, it's hard to get a penalty called against you if you have the puck. Therefore, we would expect teams that possess the puck more to draw more powerplays. And, for what it's worth, both the Montreal Canadiens and the Detroit Red Wings were top-10 possession teams when playing 5 on 5. So is there something to this theory?

To investigate this, I created a scatterplot with data pulled from the past 3 years. On the horizontal axis is a team's Fenwick Differential per 60 minutes of 5 on 5 ice time (all of it, not just while the score is close). Basically, teams that possess the puck more often than their opponents will have an FD/60 5v5 greater than 0, and teams that don't will have an FD/60 5v5 less than 0. On the vertical axis is the number of powerplay opportunities that team gets. If the theory that teams that possess the puck more get more calls is correct, then we should see a line, with some random variation, from the lower-left (less possession, fewer powerplays) to the top-right (more possession, more powerplays).
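For concreteness, here's a minimal sketch of how the horizontal-axis metric could be computed. The function and argument names are my own, not from any real data source:

```python
def fd_per_60(fenwick_for, fenwick_against, toi_5v5_minutes):
    """Fenwick Differential per 60 minutes of 5v5 play.

    Unblocked shot-attempt differential, scaled to a 60-minute rate.
    Positive values mean a team out-attempts its opponents at 5v5.
    """
    return (fenwick_for - fenwick_against) / toi_5v5_minutes * 60.0
```

A team with 1000 unblocked attempts for and 900 against over 3000 minutes of 5v5 time, for example, would come out to an FD/60 5v5 of +2.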

Note: Because this is over the past 3 years, one season was shortened. I took the number of powerplays a team got in 48 games and normalized it to how many they would get over 82 games, so that all data is based on actual events, scaled to three 82-game schedules.
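The pro-rating in the note above amounts to a one-line scaling, sketched here with my own function name:

```python
def normalize_to_82(powerplay_opportunities, games_played=48):
    """Pro-rate a shortened-season total to an 82-game pace."""
    return powerplay_opportunities * 82.0 / games_played
```

Montreal's league-leading 203 opportunities in the 48-game season, for instance, pro-rate to roughly 347 over a full schedule.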


As you see, there is no such line. If we make a best-fit line, we'll see that it does slope upwards, even if only a little bit. However, the correlation is extremely weak. The R^2 value, which is a numerical representation of how well the data matches the best-fit line (1 is a perfect match, while 0 means there is no correlation at all), is only .0018. That means this distribution is basically random, except for the curious trend that the teams seem to separate into 2 groups. The top group is teams that get roughly 225-300 powerplays per season, and the bottom group is teams that got about 125-175 powerplays. Why could this be? I honestly have no idea, so if you do, make sure to comment on this. If your theory can be quantified, I'll probably test it out, because this is bothering me.

So, teams that possess the puck don't get more powerplays. But could they get fewer calls going against them? Well, I made the same graph as the last one, only the vertical axis now features the number of calls made against a team, so if there is some truth to this theory, we would expect to see a line that goes from the top-left (less possession, more penalties against) to the bottom-right (more possession, fewer penalties against).


Turns out that there is pretty much no truth to this understanding either. In fact, this graph looks pretty much the same as the graph for powerplay opportunities. While there is a line with a small slope going in the direction that we expect, the correlation is so weak that to take it seriously would just be incorrect. Interestingly enough, the teams even split into 2 similar groups, one with a few penalties called against and one with more.

So, do these two groups end up evening out so that teams that get more powerplays also kill more penalties and those that get fewer powerplays get fewer penalty kills? Or maybe the teams that get lots of powerplays only have a few penalties called against them, and the result just doesn't depend on possession?

To examine this, I made a third scatterplot. Once again, I kept the same horizontal axis. However, the vertical axis is different. I made (sort of, at least. I'm sure I'm not the first guy to do this) a stat called penalty differential, or PD, and it is the number of powerplays a team gets minus the number of penalties it must kill. So, a negative PD means a team had more penalty kills than powerplays, and a positive PD means a team had more powerplays than penalty kills.
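The stat is simple enough to express in one line; the argument names below are my own:

```python
def penalty_differential(pp_opportunities, times_shorthanded):
    """Penalty differential (PD): powerplays drawn minus penalties killed.

    PD > 0 means more powerplays than penalty kills;
    PD < 0 means the reverse.
    """
    return pp_opportunities - times_shorthanded
```

A team that drew 250 powerplays and killed 230 penalties, for example, would have a PD of +20.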


So what do we see? Well, the plot looks more or less random. What that means is that the teams that had more powerplays generally had to kill more penalties as well. While there is a slight positive trend in terms of possession, the correlation is really weak again, so it can't be taken too seriously.

So what does this all mean? Well, puck possession doesn't really affect the number of powerplays, or penalty kills, that a team gets. So, there are two possible conclusions. The first is that penalties called both for and against a team are low-occurrence events that are distributed independently of any other in-game events, but that even out over time. I can live with this one.

The second conclusion, and the one that I like better, is that a team's penalty differential is not random and doesn't have to even out, but it just doesn't depend on puck possession. I like this one more because I don't like random, especially when it comes to officiating. The only problem is that I haven't been able to think of what else penalty differential might depend on, so if you have an idea, please comment. Once again, if your theory is quantifiable, there is a good chance that I will investigate it numerically and post the results.

Part II: Does a team's penalty differential lead to long-term success?

This is pretty quick and easy to investigate. Success will be defined as points per game in the regular season, thanks to the shortened season (I'm beginning to think the real reason the lockout happened is that Bettman and Fehr agreed there was too much fancystat work going on, so they wanted to just throw a sample-size wrench into everything. Seriously, the 48-game season is really, really annoying.). I will once again use my best friend, the scatterplot. On the horizontal axis will be a team's penalty differential, so teams getting more powerplays will be towards the right and teams killing more penalties towards the left. The vertical axis will be a team's points per game. Data was taken, once again, over the past 3 years. If penalty differential relates to long-term success, we should see the data points line up into a line, with some random variation.


And there is basically no trend. So while there is a slight upward-sloped best-fit line, its correlation is so weak that it can't be taken seriously. So what's the takeaway? Even if a team gets "all the calls" (I'm looking at you, both fanbases of the Habs-Bruins rivalry, and also literally every other fan base of literally every sports team, ever), it doesn't matter in the long run. Individual games, maybe, but not over a season, and probably not even over the playoffs.

Part III: How does special teams performance correlate to long-term success?

This is also pretty straightforward. We know that puck possession correlates strongly to long-term success, and we know that PDO correlates to success as well, though not as much as puck possession does. So where does special teams performance fit on this spectrum?

To answer this, I made 3 scatterplots. On each one, the vertical axis is a team's points per game, which is how I choose to define success. On the horizontal axis is a special teams metric, so we'll expect to see a line in the scatterplot from the bottom-left to the top-right of each plot. The first plot has powerplay percentage on the horizontal axis, the second plot has penalty kill percentage, and I'll get to the third plot soon enough. So let's start by looking at how powerplay percentage and penalty kill percentage correlate to long-term success.



The first thing to notice is that the best-fit lines for both plots slope upward, which makes sense, because we expect teams that are good at scoring on the powerplay or killing penalties to win more games. What's good is that the correlations aren't insignificant either, so the trend appears to be real. While success on either the powerplay or the penalty kill is nowhere near as predictive as, say, puck possession, it's still not insignificant.

So, how can we expand on this? Well, the goal should be to have a single stat that quantifies all special teams performance, and the way to make sure that it works is to see how predictive it is of long-term success. My idea was to just add powerplay and penalty kill percentages together, which I call Special Teams Performance, or STP. STP will be a number around 100, with good special teams performers having an STP above 100, and poor special teams performers having an STP below 100. It is important to note, however, that while values for STP can look, and maybe feel, like PDO, they are not randomly distributed, because they are based on skill and systems work, and STP does not necessarily regress to the mean. So teams with extreme STP values are not lucky or unlucky, but rather elite or pathetic at special teams. My third scatterplot for this investigation therefore has a team's points per game on the vertical axis and their STP on the horizontal axis, and the best-fit line should be similar to what we saw for the powerplay and penalty kill plots above.
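STP is likewise a one-liner; the function name in this sketch is my own:

```python
def stp(pp_percentage, pk_percentage):
    """Special Teams Performance: powerplay % plus penalty kill %.

    Centered around 100; values above 100 indicate good combined
    special teams, values below 100 indicate poor ones.
    """
    return pp_percentage + pk_percentage
```

Using the Florida Panthers figures quoted below (a 10% powerplay and a 76% penalty kill), their STP works out to 86.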


Distracting sidenote: See that team, way to the left all by their lonesome? That would be the 2013-2014 Florida Panthers, and I'm pretty sure they just tried to be historically bad at special teams this year as a claim to fame. They had a penalty kill of 76% and a powerplay of 10%, so their combined STP was worse than a lot of teams' penalty kill. Good job, Florida. Making really bad teams like Buffalo and Toronto feel better about themselves since 1993.

And it turns out that it does look similar. In fact, the correlation between STP and success is even stronger than for powerplay and penalty killing performance taken separately. This is good, and the correlation is even a little stronger than PDO's. So, to sum up: to predict how a team's season will go, the predictive factors, listed in order of decreasing predictive value, are puck possession, then special teams, and then PDO.

With that, I wanted to take my analysis a bit further. ExtraSkater has a plethora of statistics available for all situations, so my thought is to make a statistic similar to STP, but use shot generation numbers. The idea would be to add a team's penalty killing and powerplay shot attempt differentials, so that teams that are good on special teams will have a number greater than 0 and teams that are bad will have a number less than 0. This could be more predictive of success because powerplay and penalty killing percentages are still based on relatively low-occurrence events (goals), while shot attempts happen a lot more, which is why possession metrics are so good in the first place.

In order to do that, I made another scatterplot. The vertical axis is still points per game, as long-term success is still what we want to know about. However, instead of using special teams success rates on the horizontal axis, I used Fenwick differentials per 60 minutes. I used Fenwick because I figure that a key part of killing penalties is blocking shots, and a key part of powerplays is getting shots through, so Fenwick (all unblocked shot attempts) will be a better assessment of special teams play than Corsi (all shot attempts). Like I postulated above, I added a team's Fenwick for on the powerplay (a positive number) and their Fenwick against on the penalty kill (a negative number), to get a value that is centered around 0. Good special teams performers will be to the right and bad ones to the left, so if there is a strong correlation between what I call Special Teams Fenwick Differential per 60 Minutes (STFD/60) and success, then we will have a best-fit line from the bottom-left to the top-right, with some random variation, on the scatterplot.
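A sketch of the STFD/60 calculation as described above; the function and argument names are hypothetical:

```python
def stfd_per_60(pp_fenwick_for, pp_toi_minutes,
                pk_fenwick_against, pk_toi_minutes):
    """Special Teams Fenwick Differential per 60 minutes (STFD/60).

    Powerplay unblocked attempts for (a positive rate) plus penalty-kill
    unblocked attempts against (counted as a negative rate), each scaled
    to per-60 rates. Centered around 0: good combined special teams sit
    above 0, bad ones below.
    """
    pp_rate = pp_fenwick_for / pp_toi_minutes * 60.0
    pk_rate = pk_fenwick_against / pk_toi_minutes * 60.0
    return pp_rate - pk_rate
```

A team that generates attempts on the powerplay at exactly the rate it concedes them on the penalty kill lands at 0; out-attempting on the powerplay pushes the number positive.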


Turns out that we do see a trend. There is definitely a positive correlation between STFD/60 and success, but there's a problem. It's actually weaker than the correlation between STP and success. That means the better measure for special teams, in my opinion, is actually not based on shot generation, but on actually putting the puck in the net, or keeping the puck out of the net. It's interesting that low-occurrence events are actually a better indicator of success in some cases.


So what did we learn?

1. We learned that puck possession doesn't lead to having more calls made for or against you. There might be more to it, but as far as I can tell it's random.

2. We learned that the number of penalties called for or against a team doesn't really have much of an effect on long-term success.

3. We also learned, however, that special teams performance is a major contributing factor to success. It's not as predictive as puck possession, but it does matter more than PDO. The best stat I've found is what I call Special Teams Performance, or STP, and it is simply the sum of powerplay and penalty kill percentages. It is a skill-based stat, however, and not subject to regression.

It would be a good idea to let this sink in for a while. I am saying that, based on my data, special teams matter, quite a bit. What's fascinating to me is that special teams performance has never really been addressed by the fancystats community, at least to my knowledge, and I think that the analysis of the game has suffered. I think that statistical analysis of special teams performance needs to be the new focus of people who analyze fancystats (fancy statisticians?).

So that's that. Once again, please comment if you have something to say. If your thing to say is that you think I'm a bumbling fool, or a nerd who doesn't actually have perceptions of hockey based on reality, or if you think I am going about this the right way but got a totally wrong result, I especially want to hear from you. I don't want to sound unappreciative of praise, because I really would appreciate the compliments, but I learn more from criticism, critique, and cynicism.

Fanpost content is created by members of the community. Views and opinions presented do not necessarily reflect those of Eyes on the Prize's authors, editors, or managers.