The topic of team conversion rates - the proportion of Scoring Shots that teams convert into goals - and their predictability has come up before here on MoS. When we've explicitly attempted to predict conversion rates our focus has been on the rate for a particular team in a particular game - for example in this post from 2014, where we wound up explaining only about 3% of the variability in rates.
We've also looked more generally at the effects of Venue and Era on conversion rates (in this post too), compared conversion rates in Finals versus those in Home and Away games, and compared conversion rates for underdogs versus those for favourites.
Today I want to adopt a whole-of-season approach and ask the simple question: to what extent can a team's conversion rate in a season be explained by its conversion rate in the previous season? So, relative to that earlier post where we were looking at game-to-game conversion rates, we'll take advantage of the fact that some of the ephemeral effects might be controlled for by aggregating across games in a single season.
For the purpose of this analysis I'm going to use all of the data for seasons 1897 to 2015, and I'm going to define a team on the basis of the name it carried in the seasons that it played, which means, for example, that when Footscray became the Western Bulldogs, we have no conversion rate to use for the Bulldogs' first season (though, to be honest, the decision of whether or not to carry teams' records into subsequent seasons when their name changes is of minimal consequence).
Now the first issue we need to deal with is the trend behaviour of conversion rates, by which I mean that the overall conversion rate in a particular season is highly correlated with that in the previous season (see chart at right). Such trend behaviour will tend to elevate the correlation between conversion rates for all teams from one season to the next since, as the saying goes, a rising tide lifts all boats. We need a method for removing this trend effect.
So, to adjust for this factor we'll analyse each team's conversion rate in a season relative to the all-team average for that season. My question then becomes: to what extent can a team's conversion rate relative to its peers be explained by its conversion rate relative to its peers in the previous season?
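For readers who'd like to see the mechanics, here's a minimal sketch in Python of the detrending and correlation steps, using invented rates for a single team (the real analysis, of course, runs over the full 1897 to 2015 data):

```python
from statistics import mean

def relative_rates(team_rates, season_avgs):
    """Express each team-season conversion rate as a difference
    from the all-team average for that season (the detrending step)."""
    return [r - a for r, a in zip(team_rates, season_avgs)]

def lag1_correlation(series):
    """Pearson correlation between a team's relative rate in one
    season and its relative rate in the previous season."""
    x, y = series[:-1], series[1:]
    mx, my = mean(x), mean(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    varx = sum((a - mx) ** 2 for a in x)
    vary = sum((b - my) ** 2 for b in y)
    return cov / (varx * vary) ** 0.5

# Illustrative (made-up) conversion rates for one team across five
# seasons, alongside the all-team average for those same seasons.
team = [0.54, 0.52, 0.56, 0.51, 0.55]
league = [0.53, 0.53, 0.54, 0.52, 0.54]
rel = relative_rates(team, league)
print(round(lag1_correlation(rel), 2))
```

A team that flips between above- and below-average conversion from season to season, as in this toy example, will show a strongly negative lag-1 correlation; a team that strings together above-average seasons will show a positive one.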
It turns out that the answer to this question varies a little depending on the era of VFL/AFL football we consider, though the correlation is not especially high in any era. It's lowest - indeed slightly negative - for the period 1897 to 1919, and highest for the period 1940 to 1959, though even there it's only +0.31, which means that less than 10% of the variability of a team's conversion rate in any season in that era can be explained by its conversion rate in the previous season. In short, the season-to-season correlation in team conversion rates is low and only mildly positive (except in the early seasons), regardless of the era we look at.
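As a reminder of the arithmetic behind that "less than 10%" figure: under a simple linear relationship, the proportion of variance explained is the square of the correlation coefficient.

```python
# Variance explained is the square of the correlation coefficient,
# so a correlation of +0.31 explains just under 10% of variability.
r = 0.31
print(f"{r**2:.1%}")
```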
What about teams then? Do some display higher levels of autocorrelation than others?
Gold Coast (+0.55) shows the highest correlation, but it is based on only a handful of seasons. Hawthorn's correlation of +0.48 is based on a much larger number of observations and suggests that they, most of all amongst the teams with long histories, have tended to string together back-to-back seasons of above- and below-average conversion rates.
After the Hawks we find a cluster of teams with correlations in the +0.2 to +0.3 range: North Melbourne, Fremantle, South Melbourne, Essendon, Kangaroos, Melbourne, Port Adelaide and Footscray.
Then come teams with correlations in the 0 to +0.19 range: Collingwood, Richmond, Fitzroy, Sydney, Carlton, St Kilda, West Coast, Adelaide and Geelong, the last of these having a correlation so near zero it might as well be zero.
Lastly come the five teams with negative correlations, signifying that they tend to flip from above- to below-average conversion rates from one season to the next: Brisbane Lions, Western Bulldogs, GWS, Brisbane Bears and University.
So, Hawthorn aside, all teams with reasonably long histories show near-zero correlations in their standardised conversion rates from one season to the next. It remains the case then that conversion rates, however modelled, appear to behave as if they are selected at random. The Hawthorn Anomaly is an interesting exception, but I've no basis on which to explain it. As always, I'd appreciate your hypotheses.