ShotQuality Model Postseason Evaluation

By Ben Wieland

With another NBA season in the books, it’s time to take a deep breath, collect ourselves, and dive back into the 2021-22 ShotQuality data. These sorts of postmortems can be revealing, both about what the ShotQuality model got right and about where its game projections still have room to improve.

Unless otherwise noted, the data analyzed here comes from the regular season, which keeps the sample size equal for every team.

In the regular season, teams that won the ShotQuality score also won the game 68.1% of the time — in other words, the projected outcome matched the actual game outcome slightly more frequently than two in three times. On average, the total error between the projected ShotQuality score and the actual score for each game (calculated by summing the absolute individual errors for each team) was 16.2 points. Teams that won the ShotQuality score won their next game 56.5% of the time; teams that won the actual score won their next game 54.0% of the time.
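For concreteness, here’s a minimal sketch of how those two numbers fall out of game-level data. The record format and the two-game sample are hypothetical placeholders, not ShotQuality’s actual schema:

```python
# Hypothetical game records: (actual_home, actual_away, sq_home, sq_away).
games = [
    (152, 79, 117, 89),   # the Grizzlies-Thunder blowout discussed below
    (107, 97, 102, 100),  # Game 4 of the Finals, discussed below
]

agreements = 0
total_errors = []
for actual_home, actual_away, sq_home, sq_away in games:
    # Did the ShotQuality winner match the actual winner?
    agreements += (actual_home > actual_away) == (sq_home > sq_away)
    # Total error: sum of each team's absolute projection error.
    total_errors.append(abs(actual_home - sq_home) + abs(actual_away - sq_away))

print(f"agreement rate: {agreements / len(games):.1%}")
print(f"average total error: {sum(total_errors) / len(total_errors):.1f} points")
```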

In the postseason, teams that won the ShotQuality score also won the game 67.8% of the time, with an average total error of 16.4 points per game. The playoffs were especially “swingy” this season from game to game — teams that won the ShotQuality score against another team in a series won their next game against that team just 50% of the time, while teams that won the actual score won their next game just 41.7% of the time.

Here’s a quick glimpse at ShotQuality model accuracy by team, in terms of whether or not the model’s predicted outcome (win or loss) matched the actual game outcome. Every single team’s ShotQuality results concurred with their actual results more often than not, and the table seems to suggest that the model most accurately projected the Suns and the Rockets.

However, pure win-loss accuracy is a flawed metric: variance between the final score and ShotQuality expectations is less likely to actually flip the game outcome for especially good or bad teams.

For example, on December 2, the Grizzlies beat the Thunder 152-79, with a ShotQuality projected score of 117-89. The model got this game’s final point differential wrong by 45 points, the highest error of the season — but since it still got the win-loss column correct, there’s no indicator of how unusual this game was.

This pattern plays out on a large scale: on average, teams with extremely high or low winning percentages had significantly higher win-loss projection accuracy. The graph below demonstrates this trend, with a second-order (quadratic) regression line (blue) and a local polynomial (LOESS) regression line (red) fit to illustrate the curvilinear relationship.
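For readers who want to reproduce fits like these, here’s a hedged sketch using numpy and statsmodels; the per-team values below are synthetic placeholders standing in for the real data:

```python
import numpy as np
from statsmodels.nonparametric.smoothers_lowess import lowess

# Synthetic placeholder data for 30 teams; the real inputs would be each
# team's win percentage and the model's win-loss accuracy for that team.
rng = np.random.default_rng(0)
win_pct = rng.uniform(0.2, 0.8, size=30)
accuracy = 0.62 + 0.9 * (win_pct - 0.5) ** 2 + rng.normal(0, 0.02, size=30)

# Blue curve: second-order (quadratic) least-squares fit.
quadratic = np.poly1d(np.polyfit(win_pct, accuracy, deg=2))

# Red curve: local (LOESS) regression; frac sets the smoothing window.
loess_curve = lowess(accuracy, win_pct, frac=0.6)  # sorted (x, y-hat) pairs
```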

A better metric, uncorrelated with team win percentage, is average net error, calculated by summing the model’s absolute error in team and opponent scores for each game. For example, in Game 4 of the NBA Finals, the Warriors beat the Celtics 107 to 97 against a projected ShotQuality score of 102 to 100. The net error for this game is the absolute error in the Warriors’ score, |107 − 102| = 5, plus the absolute error in the Celtics’ score, |97 − 100| = 3, for a net error of 8 points.
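In code, that calculation is just a pair of absolute differences (the function name here is mine, not ShotQuality’s):

```python
def net_error(actual_a, actual_b, sq_a, sq_b):
    """Sum of both teams' absolute projection errors for one game."""
    return abs(actual_a - sq_a) + abs(actual_b - sq_b)

# Game 4 of the Finals: actual 107-97, projected 102-100.
print(net_error(107, 97, 102, 100))  # |107-102| + |97-100| = 5 + 3 = 8
```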

The leaguewide average for absolute net error was 16.2 points this season, with a standard deviation of about 1.17 points. The Pelicans and Lakers had final scores that most resembled their ShotQuality projections, while the Heat and Timberwolves experienced the most variance from ShotQuality projections in their final scores.

All of the metrics so far become more interesting once we add directionality. Skipping the “absolute” step yields directional (positive or negative) errors, which let us get at the heart of the postseason evaluation: which teams were especially lucky or unlucky — or might be executing strategies that the ShotQuality model doesn’t currently account for?
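As a quick sketch, the signed version of the same Finals example looks like this (names are illustrative):

```python
def directional_error(actual, projected):
    """Signed error: positive = team outscored its projection ("lucky"),
    negative = team underperformed it ("unlucky")."""
    return actual - projected

print(directional_error(107, 102))  # Warriors: +5 (beat their projection)
print(directional_error(97, 100))   # Celtics: -3 (fell short of theirs)
```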

To start, here’s the overall luck table. The luckiest team, the Orlando Magic, outperformed their ShotQuality expectations by an average of 4.5 points per game; the unluckiest team, the Kings, underperformed by an average of 5.2 points per game.

The next table showcases exclusively offensive luck. The Magic’s overperformance seems to be exclusively on the offensive end, and Detroit’s is also mostly attributable to offensive overperformance.

The five unluckiest offensive teams all ranked in the top half of the league in offensive rating, despite underperforming ShotQuality expectations. The model might be systematically overestimating the size of the talent gap between the best and worst offenses, resulting in high-level offenses “underperforming” and the worst offenses “overperforming.” This would explain four of the five luckiest offenses being bad in terms of offensive rating and all five of the unluckiest offenses being good.

The defensive luck table flips the interpretation: since defensive ratings track points allowed, not points scored, positive values indicate poor luck (opponents scoring more points than expected) while negative values indicate good luck. Three of the conference finalists grade out as the most overperforming defenses, while the Kings, Pacers, and Trail Blazers finish in a triumvirate of their own atop the list of defensive underperformances.

Another interesting note from these tables: most variance seems to be offensive, not defensive. The magnitude of offensive error is significantly larger than the magnitude of defensive error, on average. This tracks with the common belief that offense is more “random” on a game-to-game level than defense — makes and misses are fickle in a way that defense is not.

Large ShotQuality margins of victory were also significantly more predictive of victory than close games, another encouraging sign for the model. Teams that won the ShotQuality score by more than 5 points won the game 73.6% of the time; by 10, 80.6%; and by 15, a whopping 87.6%. In contrast, “close” ShotQuality scores within 5 points resulted in a win probability of just 55.6% for the higher-scoring ShotQuality team.
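Here’s a sketch of that threshold analysis, reusing the hypothetical game-record format from earlier:

```python
def win_rate_above_margin(games, margin):
    """Among games where one side won the ShotQuality score by more than
    `margin` points, how often did that side also win the actual game?"""
    hits = total = 0
    for actual_home, actual_away, sq_home, sq_away in games:
        if abs(sq_home - sq_away) <= margin:
            continue  # not a decisive ShotQuality margin
        total += 1
        hits += (sq_home > sq_away) == (actual_home > actual_away)
    return hits / total if total else float("nan")

games = [(152, 79, 117, 89), (107, 97, 102, 100)]  # hypothetical records
for margin in (5, 10, 15):
    print(margin, win_rate_above_margin(games, margin))
```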
