On the Relative Importance of Offensive and Defensive Ability in VFL/AFL History

In the previous post here I introduced MoSSBODS 2.0, a Team Rating System designed to provide separate Offensive and Defensive Ratings for teams in the VFL/AFL. Today I want to explore the relationship between teams' Ratings and their on-field success.

Firstly, in the table below, we summarise the home-and-away season and Grand Final performance of the teams that finished with the highest Overall, Defensive, or Offensive Ratings at the end of their respective season.

We see, for example, that across the period 1897 to 1925, the team finishing the season with the highest Overall Rating won the Flag 48% of the time, and finished Runner Up 31% of the time. That means the highest Rated team made the Grand Final almost 80% of the time. This proportion has increased over each subsequent era, reaching 86% across the period from 1986 to 2015.

The third and fourth columns in each block summarise how often the ultimately highest Rated team finished as Minor Premiers or as Runners Up in the home-and-away season. These show that, in every era, the highest Rated team was more likely to play in the Grand Final than to have finished 1st or 2nd on the ladder. Although we should recognise that there is some endogeneity in the analysis here - in that the act of making the Grand Final is likely to have lifted a team's Rating relative to the end of the home-and-away season - this result nonetheless provides some support for the notion that Grand Finals generally do a good job of rewarding deserving teams.

In the middle block of data we have the same percentage calculations, but now performed for the team that finished the season with the highest Defensive Rating, which may or may not be the same team that finished with the highest Overall Rating. These teams have, in the most-recent era, won the Flag 47% of the time, made the Grand Final 70% of the time, finished as Minor Premier 43% of the time, and finished as Runner Up in the home-and-away season 17% of the time.

The last block of data completes the analysis by providing data for the teams that finished the season with the highest Offensive Rating. In every era except the first, these teams have been more likely to play in the Grand Final, but in every era they've also been less likely to win it. Similarly, they've been Minor Premiers slightly less often and finished 2nd in the home-and-away season slightly more often than the teams finishing with the highest Defensive Rating. So, if you wanted to make the Grand Final, you'd rather be Ranked 1st on Offence, but if you wanted to win it, you'd rather be Ranked 1st on Defence.

As well as reviewing where the highest Rated teams finished in the competition we can also analyse where teams that had success in the competition were Rated at the end of the season.

This data, provided in the table below, shows that, for example, Premiers have been one of the Top 2 Rated teams Overall in 83% of the 119 seasons. Runners Up have come from the Top 2 52% of the time, and have been the 3rd-, 4th- or 5th-highest Ranked team 42% of the time.

The Premiers have been Ranked Overall outside the Top 2 in only 17% of seasons. The two most-recent Premiers in this position have been:

  • Adelaide in 1997, who finished Ranked 4th having ended the home-and-away season in 4th and having scraped past the Dogs in the Preliminary Final by just 2 points.
  • Essendon in 1993, who also finished Ranked 4th. They were Minor Premiers but with a 13-1-6 record, matched by the Blues in 2nd, and only just ahead of the 13-7 records of the teams finishing 3rd and 4th.

Three other Premiers finished Ranked only 6th Overall: Carlton in 1945, Collingwood in 1958, and Carlton again in 1972. That Ranking looks a little harsh for all of these teams, though the 1958 Collingwood and 1945 Carlton teams both had unspectacular home-and-away seasons.

The Overall Ranking pattern of Minor Premiers and of the Runners Up in the home-and-away season is quite similar to the pattern just discussed for Grand Final performances, although the Minor Premiers are slightly more likely to finish the season Ranked outside the Top 2 places.

Comparing the data in the middle and right-hand blocks we see that Premiers and Minor Premiers are more likely to be Rated in the Top 2 on Defence than on Offence. For Runners Up overall and in the home-and-away season, the Ranking profiles on Offensive and Defensive abilities are very similar.

Another instructive way of assessing the relative importance of Offensive and Defensive ability to competition outcomes is to calculate how often Grand Finalists and teams finishing 1st or 2nd at the end of the home-and-away season ended the season with their Offensive Rating higher than their Defensive Rating. 

In the most-recent era we find that, for the first time, Premiers are more likely than not to be Rated higher on Offence than on Defence, while the opposite is true for Runners Up. So, while we found earlier that you'd rather be Ranked 1st on Defence than 1st on Offence if you wanted to win a Grand Final, you'd nonetheless prefer your Offensive Rating to be higher in absolute terms than your Defensive Rating.

Interestingly, in this same, most-recent era, Minor Premiers have tended to finish the season Rated higher on Defence than Offence, while teams finishing 2nd at the end of the home-and-away season have tended to finish the season Rated higher on Offence than on Defence.

The analysis so far has been heavily dependent on the outcome of a single game (ie the Grand Final) and on the performance of just a few teams from each season (ie the Premiers and Runners Up, and the Minor Premiers and Runners Up from the home-and-away season, which might be as few as just two teams in some years). In this next chart we include every game from every season, asking for each season what proportion of games were won by the team with the higher Overall, higher Defensive, and higher Offensive Rating.
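As a sketch of what that per-season calculation involves, the snippet below computes the proportion of games won by the higher-Rated team for a given Rating type. The field names and toy data are illustrative only, not MoSSBODS' actual data structures, and games where the two teams' Ratings are exactly equal are excluded from the denominator here - an assumption, since the post doesn't say how ties are handled.

```python
# A minimal sketch, assuming each game record carries both teams'
# Ratings and final scores (field names are hypothetical).

def win_rate_for_higher_rated(games, rating_key):
    """Proportion of games won by the team with the higher rating.
    Games with exactly tied ratings are excluded (an assumption)."""
    decided, wins = 0, 0
    for g in games:
        home_r, away_r = g[f"home_{rating_key}"], g[f"away_{rating_key}"]
        if home_r == away_r:
            continue  # no higher-rated team to credit
        decided += 1
        higher_is_home = home_r > away_r
        home_won = g["home_score"] > g["away_score"]
        if higher_is_home == home_won:
            wins += 1
    return wins / decided if decided else None

# Toy two-game "season": the higher-rated team wins the first game
# but loses the second, giving a rate of 0.5.
games = [
    {"home_overall": 2.0, "away_overall": 1.0, "home_score": 90, "away_score": 70},
    {"home_overall": 1.5, "away_overall": 3.0, "home_score": 80, "away_score": 75},
]
print(win_rate_for_higher_rated(games, "overall"))  # 0.5
```

Running the same function once per season, for each of the Overall, Defensive, and Offensive Ratings, yields the three series plotted in the chart.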

Dotted lines track the actual percentages while the solid lines track 5-year non-centred moving averages of that same data, removing some of the variability and highlighting the trends. The macro features of this chart reveal that:
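For clarity on what a "non-centred" moving average means here: each smoothed point averages the current season and the four preceding ones, rather than a window centred on the season. A minimal sketch, with illustrative percentages:

```python
def trailing_moving_average(values, window=5):
    """Non-centred (trailing) moving average: each point is the mean of
    the current value and the preceding window-1 values. Points without
    enough history are left as None."""
    out = []
    for i in range(len(values)):
        if i + 1 < window:
            out.append(None)  # fewer than `window` seasons available
        else:
            out.append(sum(values[i - window + 1 : i + 1]) / window)
    return out

pct = [60, 62, 58, 61, 64, 66]  # hypothetical season-by-season win rates
print(trailing_moving_average(pct))
# [None, None, None, None, 61.0, 62.2]
```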

  • Teams with Overall Rating superiority over their opponents have, in almost every season, won at a higher rate than teams with either Offensive or Defensive Rating superiority.
  • For the majority of the last 40 years, Defensive superiority has led to higher rates of winning than Offensive superiority. Offensive superiority has led to higher winning rates in 1979, 1982, 1991, 1993, 1996-2000, 2002 and 2006, and equal winning rates in 1983, 2011 and 2014.
  • Across all 119 seasons, in 56% of them Defensive superiority has led to higher rates of winning than Offensive superiority. In another 7.5% it's led to equal rates of winning.

Lastly, we might want to track these same statistics, but consider only Finals. Here, as we'd expect with much smaller sample sizes, we find greater variability, but nonetheless also find an advantage in Defensive superiority, especially in football's early history.

  • Across all of history, Defensive superiority going into Finals has led to higher winning rates than Offensive superiority in 36% of seasons, and has led to equivalent winning rates in another 34.5%.
  • Over the past 30 seasons, however, in 8 seasons Defensive superiority has led to higher winning rates than Offensive superiority, and in 8 more it's led to equivalent winning rates. 

In summary, whilst Offensive superiority might serve a team best if it's made a Grand Final, at every other point in the season Defensive superiority is likely to be more beneficial.