# Start-of-Season Team Ratings and Historical Flag Prospects

For today's blog, a simple assignment: investigate the historical relationship between team MoSSBODS Ratings at the start of the season and their subsequent ability to make the Grand Final, and to win it.

(In those seasons where no Grand Final was played we'll deem that season's Premier and Runner Up as being the teams that made the notional Grand Final.)

Our data then will be:

• the initial MoSSBODS Ratings (Offensive, Defensive and Combined) for each of the 1,386 teams that have participated during the 119 completed seasons of VFL/AFL football
• information about whether or not they subsequently made the Grand Final or won the Flag (actually or notionally, as appropriate)
• the number of teams that participated in the competition in the year we're considering

### WINNING THE FLAG

To investigate the relationship between Ratings and Grand Final success, let's fit three binary logits, one each for the Offensive, Defensive and Combined Ratings data.

We obtain the following:

• ln(Pr(Win Flag)/(1 - Pr(Win Flag))) = -2.183 + 0.612 x Initial Offensive Rating - 0.066 x Number of Teams - 0.0017 x Number of Teams x Initial Offensive Rating
• ln(Pr(Win Flag)/(1 - Pr(Win Flag))) = -2.277 + 0.746 x Initial Defensive Rating - 0.054 x Number of Teams - 0.0171 x Number of Teams x Initial Defensive Rating
• ln(Pr(Win Flag)/(1 - Pr(Win Flag))) = -2.506 + 0.489 x Initial Combined Rating - 0.057 x Number of Teams - 0.0056 x Number of Teams x Initial Combined Rating

All three models have coefficients with the signs we'd expect: higher-Rated teams are assessed as more likely to win the Flag, and a larger number of teams in the competition reduces every team's prospects of a Flag.

Now these models can be used to estimate three different Flag probabilities for any team given its initial Offensive, Defensive and Combined Rating. So, let's use them to estimate probabilities for the teams of 2016. (Note that these estimates do not take into account the vagaries of the unbalanced schedule and how the fact that some teams play others only once might help or hinder their prospects.)
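As a concrete sketch of how the fitted coefficients translate into a likelihood, the snippet below applies the three models above to a hypothetical team (the Rating of +4 Scoring Shots and the choice of team are illustrative, not the actual 2016 values):

```python
import math

# Coefficients of the three fitted binary logits above:
# (intercept, Rating, Number of Teams, Number of Teams x Rating)
MODELS = {
    "offensive": (-2.183, 0.612, -0.066, -0.0017),
    "defensive": (-2.277, 0.746, -0.054, -0.0171),
    "combined":  (-2.506, 0.489, -0.057, -0.0056),
}

def flag_likelihood(model, rating, n_teams=18):
    """Fitted Flag likelihood for a team with the given initial Rating."""
    b0, b_rating, b_teams, b_inter = MODELS[model]
    log_odds = (b0 + b_rating * rating
                + b_teams * n_teams
                + b_inter * n_teams * rating)
    # Invert the log-odds back to a likelihood via the logistic function
    return 1 / (1 + math.exp(-log_odds))

# A hypothetical team Rated +4 Scoring Shots on Combined Rating,
# in an 18-team competition
print(round(flag_likelihood("combined", 4.0), 3))  # → 0.121
```

The interaction term means a given Rating advantage is worth slightly less, in log-odds terms, in a larger competition.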

The blue line in each block charts fitted likelihoods from the relevant binary logit for a range of plausible Rating values. The red dots project the 2016 teams onto those fitted lines, based on each team's start-of-season MoSSBODS Ratings (which are available in this blog post).

I've used the term "relative likelihood" rather than "probability" in these charts because the fitted probabilities for 2016 sum to slightly less than 1 - about 0.9, in fact - when totalled across the 18 teams. The ratio of the likelihoods for any pair of teams can be thought of as the ratio of the probabilities for those two teams. If you want actual probabilities, mentally multiply the likelihoods by about 1.1.
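The rescaling described above is just a division by the common total, which leaves every pairwise ratio intact. A minimal sketch (the likelihood values are hypothetical, chosen only so that they sum to about 0.9 as in the text, and are not the actual 2016 fitted values):

```python
# Hypothetical fitted likelihoods for the 18 teams, summing to about 0.9
likelihoods = [0.23, 0.145, 0.09, 0.085, 0.06, 0.05, 0.045, 0.04,
               0.025, 0.025, 0.02, 0.018, 0.015, 0.014, 0.012,
               0.011, 0.008, 0.007]

total = sum(likelihoods)                           # about 0.9, not 1
probabilities = [l / total for l in likelihoods]   # now sums to exactly 1

# Rescaling preserves the ratio between any pair of teams
assert abs(probabilities[0] / probabilities[1]
           - likelihoods[0] / likelihoods[1]) < 1e-9
```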

The teams' ordering on likelihood is, of course, the same as their ordering on Rating, but the binary logit formulation serves to stretch out likelihoods more in some Rating ranges than in others. So, for example, on the left we see that West Coast's Combined Rating implies a fitted likelihood that is almost 60% higher than that for the next most-likely team, Hawthorn, even though their Combined Ratings differ by only 1.4 Scoring Shots (SS). There's then another considerable decline in likelihood until we reach the third-highest team, Sydney.

In the middle is the chart based on Defensive Ratings. Here we see a smaller difference between the West Coast and Hawthorn likelihoods (0.016), based on a 0.36 SS difference in their underlying Ratings, and the elevation of Fremantle from mid-pack on Combined Rating to third-place on Defensive Rating.

Lastly, on the right is the chart for Offensive Ratings, which still sees the Eagles in the top spot, but now sees Adelaide easing past Hawthorn into second. In terms of likelihood, West Coast's lead over Adelaide is just less than 0.05.

As we can see, each model gives different estimates of each team's likelihood of winning the Flag. Forced to choose one, that based on Combined Rating seems the most natural selection, incorporating as it does both Offensive and Defensive components.

That model would give a rough market for the Flag as follows:

• West Coast $3.85
• Hawthorn $6
• Sydney $13
• Kangaroos $14
• Richmond $19
• Fremantle $25
• Geelong $28
• Collingwood $33
• GWS, St Kilda $70
• Melbourne $100
• Essendon $125
• Brisbane Lions $150
• Gold Coast $200
• Carlton $250
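Framing a market from probabilities is straightforward: a fair decimal price is the reciprocal of the probability. A quick sketch using West Coast's price from the list above:

```python
def decimal_price(probability):
    """A fair decimal price is the reciprocal of the probability."""
    return 1 / probability

def implied_probability(price):
    """Inverting a decimal price recovers the implied probability."""
    return 1 / price

# West Coast's $3.85 price corresponds to an implied Flag
# probability of about 26%
print(round(implied_probability(3.85), 2))  # → 0.26
```

A real bookmaker's market would also build in an overround, so quoted prices sit somewhat below these fair values.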

### MAKING THE GRAND FINAL

We can perform a similar analysis, replacing the outcome of winning the Flag with that of making the Grand Final, starting once more by constructing three binary logits.

• ln(Pr(Make GF)/(1 - Pr(Make GF))) = -1.348 + 0.709 x Initial Offensive Rating - 0.0613 x Number of Teams - 0.0119 x Number of Teams x Initial Offensive Rating
• ln(Pr(Make GF)/(1 - Pr(Make GF))) = -1.309 + 0.765 x Initial Defensive Rating - 0.0699 x Number of Teams - 0.0150 x Number of Teams x Initial Defensive Rating
• ln(Pr(Make GF)/(1 - Pr(Make GF))) = -1.553 + 0.512 x Initial Combined Rating - 0.0628 x Number of Teams - 0.0074 x Number of Teams x Initial Combined Rating

Here again, the signs of the coefficients in the models are what we'd expect, with higher Ratings and fewer teams increasing a team's prospects of making the Grand Final.

Again, when we use these models to estimate likelihoods for the teams of 2016 we find that the probabilities don't sum to the ideal, which here is two, there being two spots available in the Grand Final. If you want to make mental adjustments you need to know that the probabilities from the Offensive model sum to about 1.8, from the Defensive model to about 1.7, and from the Combined model to about 1.9.
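Here the natural rescaling target is two rather than one, since two teams contest the Grand Final. A sketch of that adjustment (again with hypothetical likelihoods, chosen only to sum to roughly 1.9 like the Combined model's):

```python
# Hypothetical Grand Final likelihoods from the Combined model,
# summing to about 1.9 rather than the ideal of 2
gf_likelihoods = [0.46, 0.35, 0.20, 0.19, 0.14, 0.12, 0.11, 0.09,
                  0.045, 0.045, 0.034, 0.027, 0.021, 0.018, 0.015,
                  0.013, 0.012, 0.010]

scale = 2 / sum(gf_likelihoods)   # two Grand Final berths available
gf_probabilities = [l * scale for l in gf_likelihoods]
```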

Broadly speaking, there's not a lot of difference between the models for making the Grand Final and for winning the Flag, though the stronger teams see their likelihoods not quite doubled and the weaker ones see theirs more than doubled. The team orderings in each section are, of course, unchanged.

Using once again the Combined model we can frame a rough market for making the Grand Final as follows:

• West Coast $2.25
• Hawthorn $3.20
• Sydney $6.25
• Kangaroos $6.50
• Richmond $9
• Fremantle $11
• Geelong $12
• Collingwood $15
• GWS, St Kilda $30
• Melbourne $40
• Essendon $50
• Brisbane Lions $65
• Gold Coast $85
• Carlton $100

To reiterate, neither this market nor the earlier one for the Flag takes into account each team's draw, but both serve as a very rough guide to each team's chances based solely on what MoSSBODS Ratings imply. Not for a moment would I recommend them as the sole basis for wagering.
