This week Collingwood faces Sydney having played its Semi-Final only 6 days previously, while Adelaide take on Hawthorn a more luxurious 8 days after their Semi-Final encounter. The gap for Sydney has been 13 days, while that for the Hawks has been 15 days. In this blog we'll assess what effect, if any, these differential gaps between games for competing finalists might have on game outcome.
(Thanks to Sean for suggesting this analysis in the comments to an earlier blog.)
To conduct the analysis for this blog I've collected data for all Finals going back to 2000, specifically:
- The MARS Ratings of each team
- The Ladder Position of each team as at the end of the relevant home-and-away season
- The TAB Bookmaker prices of each team
- The Days Since Previous Game data for each team
- The Venue Experience of each team
- The Interstate Status of the game from the viewpoint of the team that finished higher on the ladder in the home-and-away season. (NB For the purpose of this analysis I've deemed the team finishing higher on the ladder to be the "home" team in each final)
- The Scores of each team
In total, data for 115 games is available, including the 6 finals that have been played this year.
The goal is to estimate the signs and sizes of the effects of extra days' rest on team performance, which we'll measure as the final game margin expressed from the point of view of the home team (ie home team score minus away team score). In obtaining the estimate for days' rest we need to control for the fact that it is in the nature of the AFL Finals system for teams finishing higher on the ladder to earn larger breaks between games, especially between the Qualifying and the Preliminary Finals - just as we've seen this year with the roughly 2-week breaks for the teams finishing 1st and 3rd on the ladder. Variables that could provide such a control include the Ladder Positions and the MARS Ratings of the competing teams. Statistically, it turns out that MARS Ratings do a better job.
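The model form described above can be sketched as an ordinary least-squares fit. The data below is synthetic and the coefficient values are purely illustrative placeholders (only the two days'-rest coefficients quoted later in the blog, 1.35 and -3.6, come from the source); the point is just to show the regression structure: home margin on the two MARS Ratings and the two days'-rest figures.

```python
import numpy as np

# Synthetic stand-in for the blog's 115 Finals since 2000; the distributions
# and the non-rest coefficients are assumptions, not the actual fitted model.
rng = np.random.default_rng(0)
n = 115

home_mars = rng.normal(1000.0, 20.0, n)   # MARS Ratings centre near 1000
away_mars = rng.normal(1000.0, 20.0, n)
home_rest = rng.integers(6, 16, n).astype(float)  # days since previous game
away_rest = rng.integers(6, 16, n).astype(float)

# Intercept + 4 slopes; 1.35 and -3.6 are the quoted days'-rest effects.
beta_true = np.array([0.0, 0.4, -0.3, 1.35, -3.6])
X = np.column_stack([np.ones(n), home_mars, away_mars, home_rest, away_rest])
margin = X @ beta_true  # noiseless here, so least squares recovers beta_true

beta_hat, *_ = np.linalg.lstsq(X, margin, rcond=None)
```

On noiseless synthetic data the fitted coefficients match the generating ones exactly, which is a handy sanity check that the design matrix is set up correctly before pointing the same code at real data.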
It's also conceivable that team performance could be related to the finalists' Venue Experience or to the Interstate Status of the contest, but neither of these variables attained statistical significance, so both have been excluded from the final model, which appears in all its simplistic glory at left in the table below.
The model reveals that, after controlling for the relative quality of the participants using MARS Ratings, every extra day's rest for the home team adds 1.35 points to its victory margin, while every extra day's rest for the away team reduces the home team victory margin by 3.6 points. Only the second of these coefficients is statistically significant.
This model explains a little over 22% of the variability in home team victory margins across all 115 of the sample Finals. The difference in the absolute size of the coefficients on Home MARS Rating and Away MARS Rating reflects the magnifying effect that we've seen in Finals of even small differences in relative team strengths. These coefficients suggest that every 10 Rating Points (RPs) of difference in team strengths translates into 3 points in the ultimate game margin. Partly as a consequence of this magnifying effect, the model predicts a 34-point victory by the Swans, who enjoy a 13 RP advantage over the Pies this week, and a 48-point victory by the Hawks, who enjoy a 38 RP advantage over the Crows.
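As a back-of-envelope check on the ratings component of those two predictions, the quoted rule of thumb of roughly 3 margin points per 10 Rating Points can be applied directly. This deliberately ignores the intercept and the two days'-rest terms, so it won't reproduce the full 34- and 48-point predictions on its own:

```python
# Ratings component only, using the quoted ~3 points per 10 RPs rule of thumb.
points_per_rp = 3 / 10

ratings_component = {
    "Sydney v Collingwood": 13 * points_per_rp,  # 13 RP advantage to the Swans
    "Hawthorn v Adelaide": 38 * points_per_rp,   # 38 RP advantage to the Hawks
}
# Roughly 4 and 11 points respectively; the model's full predictions of 34
# and 48 points also fold in the intercept and the days'-rest terms.
```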
Do Bookies Know About This?
Another variable that could be used, instead of MARS Ratings, to control for the disparity in the relative strengths of the competing finalists is the TAB Bookmaker head-to-head prices, expressed as an implicit victory probability for the home team. This probability, however, might already account for the benefit or detriment that teams enjoy or suffer as a result of having shorter or longer breaks, which is why I chose not to use this variable initially when the goal was to measure the size of these benefits or detriments.
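One common way to turn head-to-head prices into an implicit home team probability is to normalise the reciprocals of the two prices so that the bookmaker's overround is removed. The blog doesn't specify its exact conversion, so treat this as an assumption about the calculation rather than a statement of it:

```python
def implicit_home_probability(home_price: float, away_price: float) -> float:
    # Overround-normalised implicit probability for the home team:
    # each price's reciprocal, rescaled so the two probabilities sum to 1.
    inv_home = 1.0 / home_price
    inv_away = 1.0 / away_price
    return inv_home / (inv_home + inv_away)
```

For a hypothetical $1.60 / $2.30 market this gives roughly 0.59 for the home team.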
Having now established, using a control for team strength that I can be certain includes no hint of days' rest, that days' rest does matter, we can substitute the Bookmaker's implicit home team probability for the team MARS Ratings and assess the extent to which Bookmaker prices include a component for these effects. This model appears on the right in the table above and suggests that:
- Bookmaker prices do include a component to account for the differing days' rest for finalists, but they do not fully account for the effects (the coefficients retain their signs but become smaller in absolute size)
- MARS Ratings do a better job of explaining game margins in Finals than does the implicit bookmaker home team probability (evidenced by the smaller R-squared in the model on the right). I did wonder if this result was due to my including in the models the years prior to 2006, for which my Bookmaker data is less reliable, but rerunning the models for the period 2006 to 2012 confirms the superiority of MARS Ratings.