Team Ratings, Bookmaker Prices and the Recent Predictability of Finals

Last weekend saw three of four underdogs prevail in the first week of the Finals. Based on the data I have, you'd need to go back to 2006 to find a more surprising Week 1 of the Finals and, as highlighted in the previous blog, no matter how far you went back you wouldn't find a bigger upset than Port Adelaide's defeat of the Pies.

I've two bases for making that assessment of surprise: Bookmaker prices, for which I've data that I trust only since 2006, and All-Time MARS Ratings, which are my own and which I therefore trust implicitly for every year since 1897. YMMV.

Here's what that data looks like:

[Image: MARS Ratings of Finalists - List by Game]

We know from an earlier blog that Ratings superiority affords less of an advantage in Finals and, even adjusting for this fact, that Finals generally are harder to predict than home-and-away contests.

Today I'm exploring which types of Finals have posed the greatest challenge.

FINALS TYPE ANALYSIS

Regardless of whether you look at MARS Ratings or Bookmaker Prices, it's the Elimination and Grand Finals that seem to have been the greatest source of surprise across the 14 seasons I've included.

Looking first at Elimination Finals we see that, over the period, Bookmaker favourites have won only slightly more often than half the time, and the team with the higher MARS Rating has won only a few percentage points more often than that.

A better strategy than either of those would be to select the team that finished higher on the home-and-away competition ladder, or to select what I've called the "Higher MARS Adjusted" team, this being the team Rated higher on MARS once you add 18 Ratings Points (RPs) to the Rating of the "home" team (ie the team that finished higher on the competition ladder). Even so, adopting either of these strategies still leaves you selecting the wrong team almost 40% of the time.
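If you'd prefer that adjustment rule expressed as code, here's a minimal sketch in Python. The function and field names are purely illustrative inventions for this post, not taken from the code I actually use; the only substantive ingredients are each team's MARS Rating, its ladder position, and the 18 RP "home" adjustment just described.

```python
# A minimal, illustrative version of the "Higher MARS Adjusted" rule.
# Team records and Ratings below are made up for the example.
def pick_higher_mars_adjusted(team_a, team_b, home_bonus=18):
    """Credit the notional 'home' team (the one that finished higher on the
    ladder) with 18 Ratings Points, then take the team with the larger
    adjusted Rating."""
    a_adj = team_a["mars"] + (home_bonus if team_a["ladder"] < team_b["ladder"] else 0)
    b_adj = team_b["mars"] + (home_bonus if team_b["ladder"] < team_a["ladder"] else 0)
    return team_a if a_adj >= b_adj else team_b

higher_on_ladder = {"name": "Team A", "mars": 1008.0, "ladder": 5}
higher_on_mars   = {"name": "Team B", "mars": 1020.0, "ladder": 8}
# Team A gets the 18 RP bonus (1,026 v 1,020) and so is the selection
print(pick_higher_mars_adjusted(higher_on_ladder, higher_on_mars)["name"])
```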

Grand Finals have proven the most difficult type of Final of all to predict, with even the best of these predictors tipping the winner at a rate of less than 60%.

Qualifying Finals have been more predictable, with the Bookmaker favourite winning almost 70% of the time, the higher-Rated team and the team finishing higher on the competition ladder each winning just over 70% of the time, and the higher "MARS Adjusted" team winning three-quarters of the time.

In Semi-Finals and Preliminary Finals selecting the team from the higher ladder position or the team with the higher adjusted Rating would have, as in Elimination Finals, been solid strategies - though you could have done even better in Preliminary Finals by going with the Bookmaker favourites.

This year aside, looking at the season-by-season picture we see that 2007 proved most difficult for someone basing his or her predictions solely on the team with the higher MARS Rating; that 2005 and 2006 were most challenging for someone relying solely on the "Adjusted MARS" predictions; that TAB favouritism was most unreliable in 2005 and 2006 as well; and that basing predictions on competition ladder data was least reliable in 2003, 2005 and 2006.

Across the entirety of the period considered, the best strategy would have been to select winners on the basis of adjusted MARS Ratings. This would have secured you a 73% record, marginally better than the 72% record you could have achieved by simply taking the team with the higher ladder position in every game.
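As a rough sketch of how those strategy success rates might be tallied, assuming a one-row-per-Final table with column names I've made up for the purpose:

```python
import pandas as pd

def strategy_success_rates(finals: pd.DataFrame) -> pd.Series:
    """Proportion of Finals in which each selection strategy named the winner.
    Expects one row per Final, a 'winner' column, and one column per strategy
    holding that strategy's selection for the game."""
    strategies = ["higher_ladder", "higher_mars", "higher_mars_adjusted", "bookie_favourite"]
    return pd.Series({s: (finals[s] == finals["winner"]).mean() for s in strategies})

# Two made-up Finals, just to show the shape of the calculation
finals = pd.DataFrame({
    "winner":               ["Team A", "Team D"],
    "higher_ladder":        ["Team A", "Team C"],
    "higher_mars":          ["Team B", "Team D"],
    "higher_mars_adjusted": ["Team A", "Team D"],
    "bookie_favourite":     ["Team A", "Team C"],
})
print(strategy_success_rates(finals))
```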

DOES DIFFERENCE IN MARS RATING MATTER?

Predictive accuracy is, of course, influenced by the closeness in the abilities of the combatants in each of the Finals. Games pitting teams of similar abilities are harder to predict than those pitting teams of starkly disparate abilities.

To provide some information about whether this factor might have influenced predictive accuracy, the next table records the average MARS Ratings of the teams in each of the Final types and in each of the seasons.

Across the 14 seasons, the average Ratings gap between the teams playing in Elimination Finals is only about 16 RPs, which is about 5 RPs smaller than the average gap between the teams playing in Qualifying Finals. That probably goes some way to explaining why higher MARS-Rated teams win Elimination Finals at a lower rate than they win Qualifying Finals.
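For what it's worth, those gap figures are simple to derive from game-level data: take the absolute difference in the two teams' MARS Ratings for each Final, then average within each type of Final. A small sketch, again with column names I've invented for the illustration:

```python
import pandas as pd

def average_rating_gaps(finals: pd.DataFrame) -> pd.Series:
    """Mean absolute MARS Rating gap for each type of Final.
    Expects columns 'final_type', 'mars_team_1' and 'mars_team_2'."""
    gap = (finals["mars_team_1"] - finals["mars_team_2"]).abs()
    return gap.groupby(finals["final_type"]).mean()
```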

In Semi-Finals we see the smallest average differential in the Ratings of the competing teams, and a concomitant reduction in the success rate of the team with the higher MARS Rating, much as we saw in Elimination Finals. However, once we adjust MARS Ratings for ladder position - or simply select the team that finished higher on the ladder - we do much better. What's more, we do far better following these strategies in Semi-Finals than we do in Elimination Finals, despite the average MARS Ratings differences being about the same in both cases.

Put simply, the losers of the Qualifying Finals, all of whom come from the Top 4 Finalists, are significantly more likely to prevail over the winners of the respective Elimination Finals than a pure assessment of their relative MARS Ratings might suggest. Ladder position, and the risk of frittering away the hard work of securing a Top 4 finish, really matters in Semi-Finals.

A cogent argument could be made, I'd suggest, that the team from the Top 4 has far more to lose in a Semi-Final than a team that finished somewhere in the bottom half of the Finalists. That's why, I think, Top 4 teams have won at such an extraordinarily high rate. Interestingly, Bookmakers have not made sufficient allowance for this phenomenon.

Preliminary Finals tend to pit teams of very different MARS Ratings and tend to witness victories by the more highly-Rated (and more Bookmaker-favourited) teams.

In Grand Finals, the average difference in MARS Ratings is about 16 RPs, but we again find that this difference is only mildly predictive of the outcome. Only 50% of teams with the higher MARS Rating (base or adjusted), and only 58% of Bookmaker favourites, have been successful in Grand Finals. (These rates increase a little if you include the Grand Final Replay of 2010.) In Grand Finals, victory is determined mostly, it seems, by things other than pure MARS Rating or end-of-season ladder position.

Looking, finally, at the year-by-year view, we find that 2006 was the season in which the average difference in the MARS Ratings of the competing teams was lowest. The Finals of 2009 pitted teams that were, on average, only slightly less evenly matched.

The Finals of 2001 paired teams of the most disparate abilities.