2017: Simulating the Final Ladder After Round 12

Every year I try to delay running ladder simulations until later in the season, and every year I wind up starting them earlier and earlier.

Still, this home and away season has been just too fascinating and closely-contested for anyone who follows the sport not to be curious about how it might finish. So, I bring you a first look at the 2017 simulations from an awfully long way out.


This year, projections for the remaining games will be made using the MoSHBODS System.


  • Teams' offensive and defensive ratings as at the end of the most-recently completed round will be used, along with the most-recent Venue Performance Values, to calculate expected scores for both the Home and the Away team.
  • In that calculation both teams' offensive and defensive ratings will be perturbed by a random amount based on drawing from a Normal distribution with mean 0 and standard deviation of 24. The value of 24 was determined by analysing the probability scores produced had the methodology been used to estimate finals chances in previous seasons, starting the simulations from different points in those seasons. The value of 24, while not optimal for every starting point, did well generally.
  • These expected scores will be converted into expected Scoring Shots using the average Scoring Shot conversion rate for all teams from 2016 (about 53%). If either team is expected to register fewer than 10 Scoring Shots, their expectation is set to exactly 10 Scoring Shots.
  • In turn, these expected Scoring Shots values will be used to simulate the outcome of each remaining game in the season, using a model similar to the one derived in this 2014 blog, which comprises a random draw for each team first for Scoring Shots and then for a Conversion rate.
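The last two bullets can be sketched in code. Everything below is illustrative rather than the actual MoSHBODS implementation: the conversion of an expected score to expected Scoring Shots uses the roughly 3.65 points per shot implied by a 53% conversion rate, and the per-game draws use simple Gaussian stand-ins for the distributions derived in the 2014 blog.

```python
import math
import random

AVG_CONVERSION = 0.53                    # 2016 league-wide conversion rate
PTS_PER_SHOT = 6 * AVG_CONVERSION + 1 * (1 - AVG_CONVERSION)  # about 3.65 points

def expected_shots(expected_score):
    """Convert an expected score into expected Scoring Shots, floored at 10."""
    return max(expected_score / PTS_PER_SHOT, 10.0)

def simulate_game(home_exp_score, away_exp_score, rng=random):
    """Simulate one game's final scores (illustrative distributions only)."""
    scores = []
    for exp_score in (home_exp_score, away_exp_score):
        mu = expected_shots(exp_score)
        # Draw Scoring Shots, then a conversion rate, then tally the score
        shots = max(round(rng.gauss(mu, math.sqrt(mu))), 0)
        conv = min(max(rng.gauss(AVG_CONVERSION, 0.08), 0.0), 1.0)
        goals = round(shots * conv)
        behinds = shots - goals
        scores.append(6 * goals + behinds)
    return tuple(scores)  # (home_score, away_score)
```

Repeating `simulate_game` for every remaining fixture, and re-ranking the ladder each time, gives one simulation replicate.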

In this latest set of simulations, the entire remaining season was simulated 25,000 times.


The chart below, which I like to refer to as the MoS Dinosaur Chart, summarises the distribution of ladder finishes for the 18 teams.

There is, as we'd expect, still a lot of variability in the likely finishes for most teams, especially those sitting around mid-table.

We can also see that variability by viewing the same data in the form of a heat map where we find a lot of teams with 5-10% chances of finishing in a range of ladder positions.


This year, as I did in 2015, I'm going to use the results of the simulations to estimate the importance of each of the remaining games to the finals chances of every team. For this purpose I'll first be using a measure that I now know was proposed by Mark F. Schilling, which defines the importance of game G to a team T as the change in that team's chances of achieving some goal (here, making the finals) if the Home team wins that game compared to if the Home team loses. (For now, I'll be ignoring draws.)

To estimate this for a particular game and team using the simulation results we simply look at how often that team made the finals in all the simulation replicates where the Home team won the game being analysed, and compare this to how often it made the finals when the Home team lost that game.
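Using the simulation output, that comparison is just a difference of two conditional frequencies. A minimal sketch, assuming a hypothetical replicate layout (a dict recording which Home teams won and which teams made the finals):

```python
def schilling_importance(replicates, game_id, team):
    """Change in `team`'s estimated finals chances between the replicates
    where the Home team won `game_id` and those where it lost.

    Each replicate is assumed (hypothetically) to be a dict with
    "home_won" (game_id -> bool) and "finalists" (a set of team names).
    """
    won = [r for r in replicates if r["home_won"][game_id]]
    lost = [r for r in replicates if not r["home_won"][game_id]]
    p_if_win = sum(team in r["finalists"] for r in won) / len(won)
    p_if_loss = sum(team in r["finalists"] for r in lost) / len(lost)
    return p_if_win - p_if_loss
```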

The differences in those percentages are what's shown in the middle portion of the table below (which can be clicked on to access a larger version).

It shows, for example, that the West Coast Eagles' chances of making the finals differ by about 23% points depending on whether they win or lose Thursday night's game against the Cats (in which the Eagles are the Home team).

We can see that games generally have the most impact on the finals chances of the teams directly involved in them, but that there can also be secondary effects for other teams in the race for finals spots. For example, for that same game, Collingwood's chances decline by about 2% points if West Coast win compared to if West Coast lose. We can therefore infer that Collingwood are more likely to be in a battle for a finals spot with West Coast than with Geelong.

It's important not to read too much into small differences in any of the percentages shown here, especially for games where the Home team's victory probability is very large or very small. Recall that we have only 25,000 simulations in total, so a game where (say) the Home team is assessed as having only a 10% chance of victory will provide only about 2,500 simulations on which to base the conditional probabilities for the "Home team wins" portion of the estimates. Those estimates will have standard errors as large as 1% point. Broadly, we should take more notice of differences of roughly an order of magnitude than of those that might just be sampling noise.
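That 1% point figure is simply the binomial standard error at its worst case, a conditional finals chance near 50%. A quick check:

```python
import math

def conditional_se(p_hat, n_total, p_condition):
    """Approximate standard error of a finals probability estimated from
    only the subset of simulations where a given condition held."""
    n_subset = n_total * p_condition       # replicates in the subset
    return math.sqrt(p_hat * (1 - p_hat) / n_subset)

# 25,000 simulations, a 10% Home-win probability, and a finals chance
# near 50% (which maximises the variance) gives an SE of about 0.01,
# i.e. 1% point.
se = conditional_se(0.5, 25_000, 0.10)
```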

On the far right of the table is a Weighted Average Importance figure, which is calculated for any game as follows:

  1. Calculate the probability weighted absolute change in a team's chances of making the final 8 across the three possible outcomes, Home win, Draw, Home loss. So, for example, if a team were a 27% chance of making the finals if the Home team won, a 26% chance if they drew, and a 22% chance if the Home team lost, and if the probabilities of those three outcomes were 70%, 1% and 29%, respectively, the calculation would be:
    70% x abs(27% - 25.5%) + 1% x abs(26% - 25.5%) + 29% x abs(22% - 25.5%) = 2.1%

    The 25.5% used in the calculation is the team's unconditional probability of making the finals, and the final number can be thought of as the expected absolute change in the team's finals chances once the result of this particular game is known.
  2. Form a simple average of these expected absolute changes.
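Those two steps can be sketched as follows (the function names and input layout are mine, not from the actual code):

```python
def expected_absolute_change(cond_probs, outcome_probs):
    """Probability-weighted absolute change in one team's finals chances
    across the three outcomes (Home win, Draw, Home loss)."""
    uncond = sum(p * w for p, w in zip(cond_probs, outcome_probs))
    return sum(w * abs(p - uncond) for p, w in zip(cond_probs, outcome_probs))

def weighted_average_importance(per_team_cond_probs, outcome_probs):
    """Simple average of the expected absolute changes across all teams."""
    changes = [expected_absolute_change(cp, outcome_probs)
               for cp in per_team_cond_probs]
    return sum(changes) / len(changes)

# The worked example from the text: about 0.021, i.e. 2.1% points
x = expected_absolute_change([0.27, 0.26, 0.22], [0.70, 0.01, 0.29])
```

Note that a team whose conditional finals chances are identical across all three outcomes contributes exactly zero to the average.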

This final figure can be thought of as the average amount by which teams' finals chances will alter on the basis of the result of this game. Note that, in contrast to the 2015 methodology, I now form this average including all teams. That means there will be some teams whose contribution to the average is zero or nearly zero because their chances of making the finals are close to 0 or 1 regardless of the outcome of the game.

Small differences in the Weighted Average Importance figures for different games should also be treated with caution and larger differences paid more heed, which is why the final column seeks only to classify games on the basis of their Weighted Average Importance into five buckets.

Games with full circles alongside them are amongst the 20% most influential games (of which, since 96 games remain to be played, there are about 20). This includes three games from the upcoming Round 13.

Conversely, games with open circles alongside them are amongst the 20% least influential games, which includes the Port Adelaide v Brisbane Lions game this week.

That game provides a perfect example of why using weighted rather than raw changes in finals chances makes sense. Port Adelaide stands to suffer a 13% point reduction in their finals chances should they lose, which would otherwise seem to make this a relatively important game. But Port is assessed as less than 10% likely to lose, so the expected impact on its finals chances is quite small. In all likelihood, then, this game won't have much bearing on the finals.

Next week, given time, I hope to run 100,000 simulations, which will lift the precision of the estimates in this analysis.