This year's Finals scheduling has made this topic highly relevant again, so it seems timely to update that analysis to include data from the intervening years and to incorporate some of the improvements I've made to estimating team ratings in that same period.
In the 2012 analysis I used teams' MARS Ratings to estimate what their expected performance would be if we did not account for the days' rest they'd received between games, and found that an average team did about 2.25 points worse than expected (the estimate ranging from 1.35 to 3.60) for every one fewer day of rest it had enjoyed relative to its opponent.
For this updated analysis I'll again start from 2000, but I'll be making the following changes:
- Measuring expected outcomes using, in turn, MoSSBODS and MoSHBODS Ratings and Venue Performance Values (instead of MARS Ratings). Note that, by including Venue Performance Values, we are implicitly adjusting for any interstate travel that either of the teams needs to undertake.
- Rather than designating one team "home" and the other "away", which can be problematic in Finals, I'll be selecting for each Final, at random, which team's perspective I'll adopt when measuring performance relative to expectation. For example, were I to choose West Coast as the team whose perspective I'd adopt in their recent Final against Collingwood, my Excess Performance variable would be West Coast Score - Collingwood Score - Expected Margin according to (say) MoSSBODS from West Coast's perspective.
This approach considerably simplifies the interpretation of the resulting statistical model.
- With the introduction of the bye week prior to Week 1 of the Finals in recent years, the gap between the last home and away game and the subsequent Elimination or Qualifying Final seems less relevant. Accordingly, we'll be excluding Elimination and Qualifying Finals from the analysis.
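To make the random-perspective step concrete, here's a small sketch of how it might be implemented in Python. The data, variable names, and expected margins below are entirely invented for illustration; they're not drawn from the actual dataset.

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical finals data; all values and names are invented for illustration.
team_a_score = np.array([92, 71, 110])
team_b_score = np.array([85, 88, 64])
expected_margin_a = np.array([4.5, -6.0, 21.0])  # e.g. a MoSSBODS expectation, from team A's view
team_a_rest = np.array([8, 7, 13])
team_b_rest = np.array([8, 8, 6])

# Randomly choose whose perspective to adopt for each final
use_a = rng.random(3) < 0.5
sign = np.where(use_a, 1, -1)

# Excess Performance = Actual Margin - Expected Margin, from the chosen team's perspective
excess_performance = sign * (team_a_score - team_b_score - expected_margin_a)

# Days' Rest is also taken for the chosen team
days_rest = np.where(use_a, team_a_rest, team_b_rest)

print(excess_performance)
print(days_rest)
```

Flipping the sign of the margin and the expected margin together is what makes the perspective choice innocuous: on average, across random choices, Excess Performance is centred on zero.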
Given those changes we wind up with 91 Finals that we can use - five from each of the years 2000 to 2017 plus one for the Grand Final Replay in 2010.
We'll again fit Ordinary Least Squares models as we did in 2012, but the model will now simply be:
Excess Performance = Intercept + k x Days' Rest
where Excess Performance = Actual Margin - Expected Margin from the viewpoint of the team selected, and Days' Rest is also measured for the team whose viewpoint has been selected.
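For anyone who'd like to try something similar, a model of this form can be fitted in a few lines of Python. The days' rest and excess performance figures below are invented purely to show the mechanics, not to reproduce the actual results.

```python
import numpy as np

# Invented illustrative data: days' rest for the randomly chosen team and
# that team's excess performance (actual minus expected margin).
days_rest = np.array([6.0, 7.0, 8.0, 8.0, 9.0, 10.0, 13.0])
excess_perf = np.array([-8.0, -4.0, 1.0, -2.0, 3.0, 6.0, 11.0])

# Ordinary Least Squares: Excess Performance = Intercept + k * Days' Rest
X = np.column_stack([np.ones_like(days_rest), days_rest])
(intercept, k), *_ = np.linalg.lstsq(X, excess_perf, rcond=None)

print(f"Intercept = {intercept:.2f}, k = {k:.2f} points per day of rest")
```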
The fitted models are summarised in the table at right.
Both models show a positive effect for each additional day's rest. Using MoSSBODS to estimate expected performances, the size of the effect is about 2.3 points per day, while using MoSHBODS it's about 3.1 points per day. The p-values of the coefficient estimates provide evidence that the true effect sizes are non-zero, the evidence being stronger in the case of the model using MoSHBODS' opinions.
(Technical note for the curious: The non-zero intercepts reflect the fact that an average team during this period received about 8.3 days' rest between Finals. For the MoSSBODS model, Intercept + 8.3 times the Days' Rest coefficient is -0.6, while for the MoSHBODS model it's -0.8, both of which are close to zero. Logic suggests this should be the case because the average signed difference between actual and expected scores, when we choose one or the other team's perspective at random, is exactly zero.)
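As a quick arithmetic check of that note, we can back out the implied intercepts from the rounded figures quoted above (so the results are only approximate):

```python
# Back out the implied intercepts from the rounded values quoted in the note:
# Intercept + mean rest * coefficient ≈ reported total.
mean_rest = 8.3
reported = {"MoSSBODS": (2.3, -0.6), "MoSHBODS": (3.1, -0.8)}  # (coefficient, total)

implied = {model: total - mean_rest * coef
           for model, (coef, total) in reported.items()}

for model, intercept in implied.items():
    print(f"{model}: implied intercept ≈ {intercept:.1f}")
```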
We again find evidence of a non-zero effect for extra days' rest in Finals, which these models suggest amounts to about 2.3 to 3.1 points per extra day.
To put that in context: if we assume that two teams are perfectly evenly matched and so would be expected to draw in the absence of differential rest periods, giving one team an extra day's rest lifts its victory probability from 50% to about 53%, assuming that the one day of extra rest is worth 2.5 points.
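The conversion from a points edge to a win probability can be sketched as follows, assuming final margins are roughly normally distributed about their expectation. The 36-point standard deviation is my assumption for illustration, not a figure from this analysis:

```python
from math import erf, sqrt

def win_prob(expected_margin, margin_sd=36.0):
    """P(margin > 0) if margins ~ Normal(expected_margin, margin_sd).
    The 36-point standard deviation is an assumption, not a fitted value."""
    return 0.5 * (1 + erf(expected_margin / (margin_sd * sqrt(2))))

print(f"{win_prob(2.5):.1%}")  # one extra day's rest, valued at 2.5 points
```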
That's not a huge effect - losing a key player could be worth 2 to 4 times as much in terms of points - but it is a sometimes avoidable thumb on the scale, however small.