Goal-Kicking Accuracy After Wins and Losses

Lately I've been thinking a lot about the predictability of teams' conversion rates - that is, the proportion of their Scoring Shots that they turn into goals.

Previous analyses on the topic have revealed a handful of relationships - among them that home teams tend to convert at somewhat higher rates than away teams - but all of the effects are small in magnitude, and it remains the case that the overwhelming majority of the game-to-game variability in a team's conversion rate appears to be a function of random factors (or, at least, of factors I've not so far explicitly considered).

So, I was a little surprised by what I found when I investigated the relationship between teams' conversion rates in successive games, in one case looking at the game after a win, and in the other at the game after a loss.

In the chart below we have the results for the past 20 home-and-away seasons, including the partially completed 2016. For each season we have two sets of bars: the trio on the left shows the number of games in which a team's conversion rate fell, increased, or stayed the same in the game played immediately after a win, and the trio on the right shows the same data for the game played immediately after a loss. We exclude drawn games from the analysis because their inclusion serves only to clutter the chart without altering its story.
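
For anyone who'd like to replicate the counting, here's a minimal sketch of the tabulation in Python. It assumes the underlying data sits in a table with one row per team per game, and the column names - season, round, team, result and conversion_rate - are purely illustrative rather than drawn from my actual data.

```python
# A minimal sketch of the tabulation described above. Assumes a DataFrame
# with one row per team per game and hypothetical columns: season, round,
# team, result ('W', 'L' or 'D'), and conversion_rate (goals / scoring shots).
import pandas as pd
import numpy as np

def tabulate_next_game_conversion(results: pd.DataFrame) -> pd.DataFrame:
    df = results.sort_values(["team", "season", "round"]).copy()
    # Conversion rate in each team's next game within the same season
    df["next_conversion"] = df.groupby(["team", "season"])["conversion_rate"].shift(-1)
    df = df.dropna(subset=["next_conversion"])
    # Exclude draws, which only clutter the picture
    df = df[df["result"].isin(["W", "L"])]
    df["direction"] = np.select(
        [df["next_conversion"] < df["conversion_rate"],
         df["next_conversion"] > df["conversion_rate"]],
        ["fell", "increased"],
        default="stayed the same",
    )
    # Counts of fell / increased / stayed the same, split by season and prior result
    return (df.groupby(["season", "result", "direction"])
              .size()
              .unstack("direction", fill_value=0))
```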

And, that story is incredibly clear. Teams tend to convert at higher rates after a loss, and lower rates after a win, and this has been the case for every one of the seasons shown.

Now maybe this is just a recent phenomenon, so let's get more ambitious and produce the chart for the entire history of the sport.

If you click on the chart you'll access a larger version of it, a careful review of which will reveal that in all but about 7 or 8 seasons the same phenomenon holds. Win a game, and then expect to convert at a lower rate next week; lose and do the opposite.

So, what's going on here?

Most likely, it's an interesting example of a statistical phenomenon known as regression to the mean, which describes what happens when values are repeatedly drawn from the same bell-shaped distribution and, on one draw, an exceptionally large or small value turns up. As a matter of pure logic, the draw after that will, more often than not, lie closer to the mean than the extreme value did.

If you picture the Normal distribution in your mind's eye and imagine drawing a point somewhere, say, well to the right of it, you can see that more of the distribution's probability density lies nearer the mean (the peak of the "bell") than further away from it, out in the tails - so the next draw is more likely to land closer to the mean.
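
That intuition is easy to check numerically. The quick simulation below - not part of the original analysis - draws pairs of values from the same Normal distribution and measures how often, when the first draw is unusually large, the second lands closer to the mean.

```python
# Quick numerical illustration of regression to the mean: pairs of
# independent draws from the same Normal distribution.
import numpy as np

rng = np.random.default_rng(42)
mu, sigma, n = 0.0, 1.0, 100_000

first = rng.normal(mu, sigma, n)
second = rng.normal(mu, sigma, n)

# Condition on the first draw being 'exceptionally large' (top decile)
threshold = np.quantile(first, 0.9)
extreme = first >= threshold

# How often is the second draw closer to the mean than the first was?
closer = np.abs(second[extreme] - mu) < np.abs(first[extreme] - mu)
print(f"Second draw closer to the mean: {closer.mean():.1%} of the time")
# Typically prints something well above 50%, with no causal mechanism at all.
```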

How that applies to the current situation is as follows. If we know that a team has won a particular game then we also know that it's more likely to have converted at a relatively high rate, as evidenced by the chart below, which tracks the average conversion rates of winning and losing teams in each season of V/AFL history.

So, if a team's conversion rate can reasonably be modelled as a bell-shaped random variable, then the rate of a team that won in the previous week - a rate more likely than not to have been relatively high - will tend to regress towards the mean in the following game. In other words, teams that won their previous game will tend to see a reduction in their conversion rate in the following week. Arguing similarly for teams that lose, we'd conclude that such teams will be more likely to record a higher conversion rate in the following week. This might well be enough to explain the phenomenon we observe.

However, the structure of the V/AFL draw, which tends to have teams playing at home and then away in successive weeks, might tend to exacerbate the phenomenon given the fact that, as noted above, home teams tend to convert at higher rates than away teams, and, partly as a result, tend to win more often than they lose.

In an attempt to remove this component of the phenomenon I reanalysed the data, this time excluding as a permissible "previous game" any home team wins or away team losses.
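
In code terms, and reusing the illustrative column names from the earlier sketch along with an assumed is_home flag, the filter amounts to something like this:

```python
# Restrict the set of permissible "previous games": drop rows where the team
# won at home or lost away, so that the home-ground conversion edge can't
# drive the result. Column names (result, is_home) are illustrative only.
def permissible_previous_games(results):
    won_at_home = (results["result"] == "W") & (results["is_home"])
    lost_away = (results["result"] == "L") & (~results["is_home"])
    return results[~(won_at_home | lost_away)]
```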

Though seasons like 1999, 2011 and 2016 no longer faithfully follow the pattern we observed earlier, there's still a strong tendency in most years for the phenomenon to appear.

Pure regression to the mean seems to be a clear component of what we're seeing.

To investigate just how much it might be contributing, divorced from any considerations of the V/AFL draw, for the final piece of analysis I simulated the scores of 50 seasons, each involving two teams of identical ability playing 200 games, using the theoretical team scoring methodology I developed in this and subsequent blogs in 2014.

For the current version of the simulations I assumed that the two teams had identical Scoring Shot and Conversion distributions (ie same means, same shape parameters etc). Given that assumption, any tendency to see lower conversion rates after wins and higher ones after losses can be attributed solely to regression to the mean.
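
If you'd like a feel for how such a simulation might be set up, here's a hedged sketch. It isn't the original code, and the distributional choices - Poisson Scoring Shots, Beta-distributed per-game conversion probabilities, and the parameter values shown - are assumptions standing in for the theoretical team scoring methodology referenced above rather than a reproduction of it.

```python
# A hedged sketch of the simulation: 50 seasons of 200 games between two
# teams of identical ability, checking whether conversion tends to fall
# after wins and rise after losses purely through regression to the mean.
import numpy as np

rng = np.random.default_rng(7)
N_SEASONS, N_GAMES = 50, 200
MEAN_SHOTS, CONV_ALPHA, CONV_BETA = 25, 10.6, 9.4   # illustrative parameters only

def simulate_team(n_games):
    shots = rng.poisson(MEAN_SHOTS, n_games)
    conv_p = rng.beta(CONV_ALPHA, CONV_BETA, n_games)   # per-game conversion probability
    goals = rng.binomial(shots, conv_p)
    behinds = shots - goals
    score = 6 * goals + behinds
    conversion = goals / np.maximum(shots, 1)            # guard against a zero-shot game
    return score, conversion

after_win_falls = after_loss_rises = 0
for _ in range(N_SEASONS):
    score_a, rate_a = simulate_team(N_GAMES)
    score_b, rate_b = simulate_team(N_GAMES)
    won = score_a > score_b        # follow team A through the season
    lost = score_a < score_b
    prev_rate, next_rate = rate_a[:-1], rate_a[1:]
    # Did conversion tend to fall after wins and rise after losses this season?
    if np.mean(next_rate[won[:-1]] - prev_rate[won[:-1]]) < 0:
        after_win_falls += 1
    if np.mean(next_rate[lost[:-1]] - prev_rate[lost[:-1]]) > 0:
        after_loss_rises += 1

print(f"Seasons where conversion tended to fall after wins:  {after_win_falls}/{N_SEASONS}")
print(f"Seasons where conversion tended to rise after losses: {after_loss_rises}/{N_SEASONS}")
```

Draws are simply ignored here, consistent with their exclusion from the empirical charts.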

We find evidence of pure regression to the mean for losing teams in 48 of the 50 seasons, and for winning teams in 49 of the 50 seasons.

SUMMARY AND CONCLUSION

It seems clear that a substantial proportion of the observed phenomenon - winning teams tending to convert at lower rates in their next game, and losing teams at higher rates - can be explained by nothing more than regression to the mean, even before we consider any contribution that might come from the nature of the V/AFL draw or other factors.

We can't entirely rule out other, non-random explanations for some or all of the phenomenon - it might be, for example, that heavier emphasis is placed on goal-kicking accuracy after losses than after wins, and that this focus reveals itself in the phenomenon we've observed.

In the end, it comes down to a judgement call about the relative contributions of random and non-random factors, but the fact remains that a large proportion of what we've observed could be explained by random factors alone.