NHL power rankings: you love ‘em, I love ‘em, everyone loves ‘em. Give human beings a bunch of the same type of thing and we will try to put them in order from best to worst. Don’t tell anyone, but really, a power ranking is just a cool name for a list.
This season, Raw Charge is introducing a weekly column where we’ll provide a power ranking of all 31 teams. We won’t write about all 31 teams because frankly, no one wants to read an article that long. But we will hit some highlights and share the new order.
By now, you’re probably wondering how we’re going to rank the teams. And the answer is: with a model! How else, ya dummy? You think I’m going to watch every team? You think I’m going to voluntarily watch the Ottawa Senators or the Detroit Red Wings or the Anaheim Ducks enough to know where to rank them? No. I’m absolutely not going to do that. What I’m going to do is build a model that ranks them and then write about what it says.
Specifically, I’m going to use a Bradley-Terry model. A couple weeks ago, I was using this type of model for some volleyball work and I realized that this might be fun to do for hockey. To quote myself from the article I wrote on the topic:
These models are particularly well suited to sports because if we feed them a list of paired results, they’ll generate a list of estimated abilities for each of the items in the pairs. More specifically, if we feed them a list of scores in volleyball games, the model will use those outcomes to generate ability estimates for each team.
A Bradley-Terry model is especially useful in a situation like NCAA sports where the quality of teams varies widely and thus, quality of competition is extremely important. It makes a little less sense in the NHL where over the course of the season, quality of competition will level enough among the 31 teams that comparing them using a statistic like shot share or expected goal share is reasonable.
But even so, using a model to produce a ranking based on head-to-head results is a fun idea and I want to do it so I’m going to do it and that’s that. We’re going to make two versions of the model: one that uses goal share and one that uses expected goal share. This will allow us to look at which teams the model thinks are best based on goal results and which are best based on shot metrics. The only information we’re giving the model about each game, in addition to goals or expected goals, is which team is home and which is away. All data is via Corsica.
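For readers curious about the mechanics: the article doesn’t share its code, but the core of a Bradley-Terry model is the assumption that team i beats team j with probability p_i / (p_i + p_j), where each p is a latent "ability." Below is a minimal sketch of the simplest win/loss form, fit with the standard MM (minorization-maximization) updates. This is illustrative only: the team abbreviations are made up, and the actual rankings here are fed goal shares and expected-goal shares per game rather than binary wins and losses, plus a home/away indicator.

```python
from collections import defaultdict

def bradley_terry(results, iters=100):
    """Estimate Bradley-Terry abilities from (winner, loser) pairs.

    Uses the classic MM update:
        p_i <- W_i / sum_j( n_ij / (p_i + p_j) )
    where W_i is team i's total wins and n_ij is the number of
    games played between teams i and j. Converges when every team
    has at least one win and one loss.
    """
    teams = sorted({t for pair in results for t in pair})
    wins = defaultdict(int)    # total wins per team
    games = defaultdict(int)   # games played per unordered pair
    for winner, loser in results:
        wins[winner] += 1
        games[frozenset((winner, loser))] += 1

    p = {t: 1.0 for t in teams}  # start all abilities equal
    for _ in range(iters):
        new_p = {}
        for t in teams:
            denom = sum(games[frozenset((t, u))] / (p[t] + p[u])
                        for u in teams if u != t)
            new_p[t] = wins[t] / denom if denom > 0 else p[t]
        # rescale so abilities are identified (they're only defined
        # up to a multiplicative constant)
        total = sum(new_p.values())
        p = {t: v * len(teams) / total for t, v in new_p.items()}
    return p
```

Feeding it a handful of (winner, loser) pairs, e.g. `bradley_terry([("TBL", "DET"), ("TBL", "OTT"), ("OTT", "DET")])`, returns a dict of ability estimates where teams that beat stronger opponents earn more credit than teams that beat weaker ones, which is exactly the strength-of-schedule adjustment the article is after.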
To help make the results more interpretable, I’ve rescaled the model’s ability estimates to a 0-100 range with 100 being the best and 0 being the worst.
To start, for smell test purposes, here’s a look at what this model thought of NHL teams at the end of last season. The blue dot is the ability estimate based on expected goals and the orange dot is the ability estimate based on goals. For example, the Lightning ranked 8th best in terms of xG but 1st in terms of actual goals, which sounds about right.
As a whole, these results seem reasonable and not so different from what we’d see if we just looked at expected goal share and goal share at the end of the season. That’s to be expected because of the leveling of quality of competition over the course of a season and the smoothing of the distribution of individual game outcomes.
But early in the year, teams have played much different schedules with much different outcomes in each game so we can see some interesting results. Without further ado, here’s the first edition of the power rankings for 2019-2020.
We have to start with a caveat: the NHL had issues with its shot location data through the first couple weeks of the season, which will affect the expected goals calculation. I think this use case is still acceptable even with the broken data, but we might see some changes after the NHL fixes the locations.
The early-season results are as wild as we’d expect. Teams have huge gaps between their xG and goal results through their first six or seven games. Carolina is off to a hot start. The Flyers are playing great but not getting results to show for it. Boston is the opposite. The Rangers look like a team still mid-tank. And Winnipeg...yeah...not having a blue line seems to be going about as well as everyone thought it would.
The team we cover at this blog sits right in the middle of the pack in terms of xG but they’re buoyed by better goal results than they’ve deserved. Through their first handful of games, they’ve been a slightly worse version of the team from last year.
Some teams are seeing outrageous gaps in their results. The Kings look great by xG but are 2-5 with the third worst goal differential in the league. That’s impressive. Dallas is also in the group of teams playing well but losing anyway.
The other team Lightning fans should be keeping an eye on all season is the Vancouver Canucks. If they make the playoffs, the Lightning will get the Canucks’ first round pick in 2020. If Vancouver misses the playoffs, the Lightning get an unprotected pick in 2021, which could be a top ten pick if the Canucks continue to be as bad as they’ve been over the last few years. So far, they look like a team on the outer edge of the playoff bubble, and maybe worse than that. We don’t want to get ahead of ourselves, but if they miss the playoffs this year, rooting for them to lose next season could be lots of fun.
Starting next week, we’ll look at how the rankings change over time, both in terms of ordering and in terms of how the gap between xG and goals starts to close for most teams.