The Ten Most Surprising Things I've Learned About AFL So Far

The last few months have been a generally reflective time for me, and with my decision to leave the core MAFL algorithms unchanged for 2014 I've been focussing some of that reflection on the eight full seasons I've now spent analysing and predicting AFL results.

Here then - in reverse order, as tradition demands - are the 10 most surprising things I've learned so far:

10. The winning rate in the full home-and-away season for any VFL/AFL team in history can be inferred using an equation with just two coefficients and four scoring-related terms (two if you'll let me form differences first)

Back in 2011 I wrote about Win Production Functions and found that the following equation explained about 90% of the variability in team home-and-away season winning rates across seasons 1897 to 2010.

[The Win Production Function equation is rendered as an image in the original post.]
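
From the description above, its general form is something like this (my sketch only; the fitted values of the two coefficients are in the 2011 post, and the 0.5 intercept is my assumption):

$$\text{Winning Rate} \approx 0.5 + \beta_1\left(\text{SS}_{\text{For}} - \text{SS}_{\text{Against}}\right) + \beta_2\left(\text{Conv}_{\text{For}} - \text{Conv}_{\text{Against}}\right)$$

where SS is a team's scoring shots per game across the season and Conv its conversion rate (goals per scoring shot).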

Far from diminishing in relevance beyond the original sample period, this equation has done a startlingly good job of explaining winning rates in each of the last three seasons as well, yielding correlations between predicted and actual winning rates across all teams of +0.97, +0.97 and +0.93 respectively.

It's not all that jaw-dropping, I recognise, that a team's success should be related to its ability to generate and convert more scoring shots than its opponents. What is amazing though is that the equation relating these metrics to overall success has been so stable and so predictive over such an extraordinarily long period of time.

9. Across entire seasons, winning teams' share of scoring has been remarkably stable since early in the 1900s

In this post from 2009 I charted the share of scoring enjoyed by winning teams in every season and found that winning teams generally were responsible for:

  • About 60% of all goals
  • About 55% of all behinds, and
  • About 58% of all scoring shots

Again, more recent history has mirrored the relationship found on earlier data: in 2013, winning teams secured 61% of all goals, 56% of all behinds, and 58% of all scoring shots.
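
If you'd like to check these figures for another season yourself, the calculation is a simple aggregation. Here's a minimal sketch, assuming a hypothetical game-by-game DataFrame with goal and behind counts for each team:

```python
import pandas as pd

def winning_team_shares(games: pd.DataFrame) -> pd.Series:
    """Share of goals, behinds and scoring shots recorded by winning teams.

    Assumes hypothetical columns home_goals, home_behinds, away_goals and
    away_behinds; drawn games are excluded from the tally.
    """
    games = games.copy()
    games["home_score"] = 6 * games["home_goals"] + games["home_behinds"]
    games["away_score"] = 6 * games["away_goals"] + games["away_behinds"]
    games = games[games["home_score"] != games["away_score"]]  # drop draws

    home_won = games["home_score"] > games["away_score"]
    winner_goals = games["home_goals"].where(home_won, games["away_goals"]).sum()
    winner_behinds = games["home_behinds"].where(home_won, games["away_behinds"]).sum()
    total_goals = (games["home_goals"] + games["away_goals"]).sum()
    total_behinds = (games["home_behinds"] + games["away_behinds"]).sum()

    return pd.Series({
        "goals": winner_goals / total_goals,
        "behinds": winner_behinds / total_behinds,
        "scoring_shots": (winner_goals + winner_behinds) / (total_goals + total_behinds),
    })
```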

I don't know what it is about the nature of AFL football, but there seems to be something inherent in the structure, the rules or the practice of it that leads to these regularities. Either that or I've uncovered the longest, most intricate and most pointless of sporting conspiracies ever perpetrated.

8. The TAB bookmaker is exceptionally good at estimating the likelihood of game outcomes (aka his job)

I could choose any number of blog posts to support this contention, but the chart in this post showing the TAB bookmaker's average calibration error across seasons 2006 to 2012 is as good as any. It implies that his estimates of home teams' victory probabilities carry an average calibration error of only about 5 percentage points across the entire range of home-team probabilities.

Frankly, that's extraordinary. Outside of contrived situations where the true probabilities are constant and known, or virtually so, such as in the toss of a coin or the roll of a die, I can't think of a single repeated event type where the probability can differ markedly from one realisation to the next for which I could provide probability estimates and be confident that, for example, the event would occur between 70 and 80% of the time when I assessed its likelihood at 75%.
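
For concreteness, the calibration calculation works roughly like this (a sketch of the standard binned approach, not necessarily the exact method used in the post): group games by the bookmaker's implied home-team probability, then compare each bin's average implied probability with the home teams' actual winning rate in that bin.

```python
import numpy as np

def average_calibration_error(implied_probs, home_won, n_bins=10):
    """Mean absolute gap between implied home-team probability and the
    empirical home-team winning rate, averaged across equal-width
    probability bins that contain at least one game."""
    implied_probs = np.asarray(implied_probs, dtype=float)
    home_won = np.asarray(home_won, dtype=float)
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    gaps = []
    for lo, hi in zip(edges[:-1], edges[1:]):
        in_bin = (implied_probs >= lo) & (implied_probs < hi)
        if in_bin.any():
            gaps.append(abs(implied_probs[in_bin].mean() - home_won[in_bin].mean()))
    return float(np.mean(gaps))
```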

There's ongoing debate about whether bookmakers seek to predict outcomes in their own right or, instead, merely move prices in response to (and maybe anticipation of) market sentiment so as to ensure a positive return regardless of a game's outcome. The truth is probably that most of them do a little of each - after all, any initial market must be set without wagering history to inform it - but however they make their assessments they're remarkably good at making them.

One interesting way of testing a statistical model is to reverse the usual roles: see if the model would make or lose money fielding wagers from a bookmaker who bets according to his implicit probability assessments, but at prices set using the model's opinions. It's hard to make money gambling, and even harder when you're forced to wager on every contest.
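
As I read it, that test amounts to something like the following sketch (the mechanics here - fair prices, level stakes, one bet per game - are my own assumptions):

```python
import numpy as np

def model_as_bookmaker_pnl(model_probs, bookie_probs, home_won, stake=1.0):
    """Profit or loss for a model that fields one wager per game at fair
    prices set from its own home-team probabilities, against a bettor who
    backs whichever side the bookmaker's probabilities rate as better value."""
    model_probs = np.asarray(model_probs, dtype=float)
    bookie_probs = np.asarray(bookie_probs, dtype=float)
    home_won = np.asarray(home_won, dtype=bool)

    pnl = 0.0
    for p_model, p_bookie, hw in zip(model_probs, bookie_probs, home_won):
        home_price, away_price = 1.0 / p_model, 1.0 / (1.0 - p_model)
        # The bettor compares expected returns on each side at the model's prices
        backs_home = p_bookie * home_price >= (1.0 - p_bookie) * away_price
        price = home_price if backs_home else away_price
        bet_won = hw if backs_home else not hw
        # The model keeps the stake, and pays out stake * price on a winning bet
        pnl += stake - (stake * price if bet_won else 0.0)
    return pnl
```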

7. The bookmaker's pre-game assessment of a team's victory chances becomes less informative quite quickly as a game progresses

The assessment of team chances in-running is another topic I've explored on multiple occasions in MAFL posts. In this post from late 2012 I used a variable importance technique applicable to binary logistic models to assess, at each change (that is, at each quarter break), the information content of:

  • the TAB bookmaker's pre-game head-to-head prices for the two teams (expressed as log-odds) 
  • the home team's lead at the end of all completed quarters

Essentially what I found was that the information content of the pre-game bookmaker prices halved at every change, from a notional 100% at the start of the game, to 50% at Quarter Time, 25% at Half Time, and about 12% at Three-Quarter Time.
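
A rough way to reproduce that kind of finding (a simple proxy only, not the exact variable importance technique from the 2012 post) is to fit a logistic model of the final result on standardised pre-game log-odds plus the standardised lead at each quarter break, and watch the log-odds coefficient shrink:

```python
import numpy as np
import statsmodels.api as sm

def quarter_by_quarter_fits(pregame_log_odds, leads_by_quarter, home_won):
    """For each quarter break, fit a logistic model of the final result on
    standardised pre-game log-odds and the standardised home-team lead at
    that break. Comparing the log-odds coefficient across breaks gives a
    rough sense of how quickly the pre-game information decays."""
    z = lambda x: (x - x.mean()) / x.std()
    z_odds = z(np.asarray(pregame_log_odds, dtype=float))
    outcome = np.asarray(home_won, dtype=float)
    results = {}
    for quarter, lead in leads_by_quarter.items():  # e.g. {"QT": ..., "HT": ..., "3QT": ...}
        z_lead = z(np.asarray(lead, dtype=float))
        X = sm.add_constant(np.column_stack([z_odds, z_lead]))
        fit = sm.Logit(outcome, X).fit(disp=0)
        results[quarter] = {"log_odds_coef": fit.params[1], "lead_coef": fit.params[2]}
    return results
```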

Though I've nothing but anecdotal evidence to support the claim, my strong feeling based on observing in-running prices during the course of a number of games is that these markets rate the chances of trailing favourites far too highly for far too much of the game. People appear to expect the favourites to "come good" rather than expecting the underdogs to continue to perform "above themselves".

Even if that's not the case though, and markets do in fact accurately reprice on the basis of the in-running score, I'm still claiming that the rate of dilution of the information content of pre-game bookmaker prices is surprising to me.

6. Some game statistics are surprisingly uncorrelated to game outcomes

In a blog from 2013 Andrew exposed the weak to non-existent relationship between a team's on-field success and the number of:

  • free kicks it receives
  • free kicks it concedes
  • one percenters it records
  • handballs it delivers
  • tackles it makes

Other oft-quoted game statistics such as Contested Possession counts are only barely more predictive.
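
Checking relationships like these for yourself is a one-liner once the data are assembled. A sketch, assuming a hypothetical DataFrame holding each game's (team minus opponent) statistic differentials and the final margin:

```python
import pandas as pd

def stat_margin_correlations(games: pd.DataFrame, stat_columns: list) -> pd.Series:
    """Pearson correlation between each statistic's differential and the final
    margin, sorted from least to most predictive. Assumes hypothetical columns
    such as 'free_kicks_diff' or 'inside_50s_diff' alongside 'margin'."""
    return games[stat_columns].corrwith(games["margin"]).sort_values()
```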

The problem with most raw game statistics, I'd contend, is their lack of context. Not all free kicks, tackles and handballs are born equal, but they all count as one in the tally regardless of whether they were recorded on the boundary in the back pocket or in front of goal. Ultimately, as in the property market, it's all about location - that's what determines the impact a game statistic has on the game outcome.

Lending weight to this argument is the high correlation between team outcomes and Inside 50s, and between team outcomes and Marks Inside 50, as Andrew revealed in that same blog. These metrics, by their very nature, have their locational importance built in; no team has ever recorded an Inside 50 while camped in its own goal-square.

(Incidentally, if you'd like to brush up on your AFL terminology, this page from Wikipedia should help.)

5. Simple heuristics can make passable predictions

It was, I think, some time in 2009 when I first read about Gerd Gigerenzer and his work on heuristics, about which, of course, there is now a TED talk. He'd demonstrated, in a variety of areas, the efficacy of very simple but, as he calls them, smart rules of thumb in making inferences and predictions, so it seemed natural to apply the principles he described to the practice of tipping AFL games.

And so were born the 2009 Heuristic Tipsters. Over the period 1999 to 2008, the heuristic tipsters that I devised predicted outcomes in the home-and-away season correctly at average rates of between about 55 and 60%. Over that same period, the bookmaker data I had implied that favourites won about 64% of the time (which actually, as I'm sure I've said before, suggests an excellent heuristic of its own. In many seasons you'd have out-tipped a large number of the newspaper pundits simply by tipping the pre-game bookmaker favourite in every contest - a boring, but effective, strategy.)

The MAFL Heuristic Tipsters employed rules of thumb of limited complexity, from very simple rules such as "pick the team that's currently higher on the competition ladder" to the not much more complex "pick the team with whom you have the better season-long tipping record". Despite this apparent handicap they still predicted almost as many winners as the TAB Bookmaker.
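
By way of illustration, here's how little code two of those rules require (a sketch with hypothetical data structures; the genuine tipster definitions differ in their details):

```python
def tip_higher_ladder_team(game, ladder_position):
    """Pick the team currently higher on the competition ladder
    (a smaller number means a higher position)."""
    home, away = game["home"], game["away"]
    return home if ladder_position[home] <= ladder_position[away] else away

def tip_better_record_team(game, tipping_accuracy):
    """Pick the team with whom we have the better season-long tipping record,
    here taken to be the proportion of correct tips made when tipping that team."""
    home, away = game["home"], game["away"]
    return home if tipping_accuracy[home] >= tipping_accuracy[away] else away
```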

In fact, in 2009 a few of the Heuristic Tipsters actually out-tipped the TAB Bookmaker, and in 2010 almost all of them repeated this feat.

With the introduction of Gold Coast in 2011 and the resulting need for byes in some weeks of the competition, I rebuilt the heuristics to deal with this change, details of which you can read about in this PDF. In none of the three seasons that have followed the rebuild has a Heuristic Tipster out-tipped the TAB Bookmaker. That's been partly, I'd suggest, due to the TAB Bookmaker's success in predicting winners in these years - he's tipped at 76%, 78%, and 72% in those seasons - but I do wonder if it's also been due to the way in which I've handled byes for heuristics that depend on winning and successful tipping streaks. As it stands, byes end a streak; revisiting that assumption is currently on my notional MAFL To-Do list.

Regardless, the Heuristic Tipsters have continued to predict winners at a rate that would not embarrass them if they participated in tipping competitions replete with humans.

4. The best predictive algorithms specialise

Absent empirical evidence I'd have assumed that a statistical algorithm that excelled at predicting game margins would also excel at predicting, say, line market winners.

That's demonstrably not true though, at least for any of the models I've constructed, which is why the MAFL Funds each have their own algorithm, optimised to assess teams' chances in a distinct wagering market.

I can't find a blog post where I've described the relative performance of algorithms optimised for predicting one AFL outcome metric when applied to another outcome - perhaps that's a topic for a future post - but I've encountered the phenomenon of domain-specific excellence so often now in my AFL modelling that I think I'm simply inured to it and haven't thought it worth writing about. In my own extremely insignificant way it turns out that I'm contributing to publication bias and the file-drawer effect.

(In 2014, the two new ChiPS Predictors will provide confirmatory or contradictory evidence for my contention that algorithms must specialise, since both are fundamentally based on a single team Rating algorithm, optimised in one case to predict game margins (C-Marg) and, with the addition of a single extra parameter, in the other to assess head-to-head probabilities (C-Prob).)

3. The assumption that bookmakers levy overround equally on all outcomes is pervasive - and possibly wrong

The challenge of inferring the probabilistic leanings of the TAB Bookmaker from his head-to-head market prices is one I faced early on in MAFL. Most of the articles I could find on the topic on apparently reputable sites assumed, explicitly or implicitly, that bookmakers levied overround equally (for example, this post on the soccerwidow site is archetypical), so I took that assumption as a given for many years. Occasionally I'd come across a posting, often on a betting forum, implying that a deeper knowledge might be available to the chosen few and hinting that the assumption of equal overround was erroneous (for example, see this one). But none I could find was ever specific about what the correct assumption was, or even about how to estimate it for yourself given sufficient data, time and motivation.

A few blog posts of my own such as this one in 2012 had me wondering about the empirical evidence for the "overround equalising" assumption as far as AFL and the TAB bookmaker was concerned and led, in the truly roundabout but linear-in-retrospect way in which these things progress, to a series of posts in late 2013 culminating in this one, where I described a more general framework for thinking about bookmaker overround.

As a result I now recognise three methodologies for inferring probabilities from head-to-head bookmaker prices: overround-equalising, risk-equalising and probability score-equalising. Forced to choose a single basis on which to infer TAB bookmaker probabilities from head-to-head prices in the AFL, I'd opt for the LPSO-Optimising approach described in this blog, which follows a probability score-optimising methodology, though the Risk-Equalising approach also appeals to me on aesthetic grounds: the clunky 1.02% term in the LPSO-Optimising approach offends my sense of mathematical tidiness. (Still, maybe it's really 1% with a rounding error ...)
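
For concreteness, here's how I'd express the three conversions in code (a sketch reflecting my reading of each approach; the 1.021% LPSO constant is the figure quoted later in this post):

```python
def overround_equalising(home_price, away_price):
    """Overround levied proportionally on both teams: normalise the inverse prices."""
    inv_home, inv_away = 1.0 / home_price, 1.0 / away_price
    return inv_home / (inv_home + inv_away)

def risk_equalising(home_price, away_price):
    """Overround levied as an equal additive amount on both teams: subtract
    half the total overround from the home team's inverse price (my reading
    of the Risk-Equalising approach)."""
    total_overround = 1.0 / home_price + 1.0 / away_price - 1.0
    return 1.0 / home_price - total_overround / 2.0

def lpso_optimised(home_price):
    """Home probability is the inverse price less a fixed, empirically
    optimised 1.021% - the clunky term referred to above."""
    return 1.0 / home_price - 0.01021
```

Each function returns an inferred home-team probability from the same market prices; at typical prices the three approaches can disagree by a percentage point or more, which matters in a wagering context.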

Whatever the true basis on which overround is levied, and it's doubtful there'd be only a single basis anyway, I find it odd that the topic isn't given more coverage in the literature I've found.

2. Team composition, and other factors outside of team class, team form and game venue, have only a small influence on game outcome

From the time I started wagering on the AFL I've hoped to create a profitable algorithm that didn't use bookmaker pricing information as an input. I've come close to developing a model with these characteristics on a couple of occasions, but I've never yet been convinced of the efficacy of these models when tested on a holdout sample of games.

Notwithstanding, I'm genuinely astonished at how close we can come to predicting as well as the TAB Bookmaker with very limited amounts of data. I've already written above about the Heuristic Tipsters, which have occasionally out-tipped the TAB Bookmaker in head-to-head tipping across entire seasons, and in this blog back in 2011 I described how a model using only game venue and some very basic measures of team form could tip at rates well in excess of chance.

More convincingly perhaps, this 2010 blog found that a simple model based on TAB Bookmaker prices explained only 1.5% more of the variability in game margins from seasons 2006 to 2009 than did a similarly simple model based solely on MARS Ratings. 

Using more recent data still, and models optimised for the 2006 to 2013 period:

  • A Probability Predictor created from the TAB Bookmaker head-to-head prices (using the LPSO-Optimised variant, which sets the Home team probability equal to 1/Home Team Price - 1.021%) records an average log probability score (LPS) of 0.1949 per game.
  • A Margin Predictor created from these same probability predictions, by assuming they come from a Normal distribution with an optimised standard deviation of 33.4 points per game, records a Mean Absolute Error (MAE) of 28.92 points per game and an R-squared of 34.7% (a conversion sketched in code after this list)
  • An optimised Margin Predictor based solely on MARS Ratings and an assumed constant HGA (home ground advantage) - the equation is 0.75 x Home Team MARS - 0.74 x Away Team MARS + 5 - produces an MAE of 29.56 points per game and an R-squared of 32.1%
  • The ChiPS Predictor C-Marg produces an MAE of 29.35 points per game and an R-squared of 34.0% for this same period, and C-Prob produces an LPS of 0.1941 per game.
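
The conversions behind those bullets are compact enough to write down explicitly. A sketch using the figures above (the Normal assumption with a 33.4-point standard deviation, and the optimised MARS equation):

```python
from scipy.stats import norm

SIGMA = 33.4  # optimised standard deviation of game margins, per the bullets above

def margin_from_probability(home_prob):
    """Expected home margin implied by a home-win probability, assuming the
    margin is Normal with standard deviation SIGMA: if P(margin > 0) = p,
    then the mean is SIGMA times the inverse Normal CDF at p."""
    return SIGMA * norm.ppf(home_prob)

def mars_margin(home_mars, away_mars):
    """Predicted home margin from the optimised MARS Ratings equation quoted above."""
    return 0.75 * home_mars - 0.74 * away_mars + 5.0
```

So, for example, a home team rated a 60% chance is expected to win by about 8.5 points under the Normal assumption.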

In summary, comparing the relevant ChiPS Predictors to those derived from the TAB Bookmaker prices, the price information adds only 0.7% to the explained variability in game margins, lowers the MAE by less than half a point per game, and produces an only fractionally superior LPS.

Don't get me wrong, those differences (except perhaps that for LPS) are practically material in the context of wagering, but the implication remains that the masses of additional information available to the TAB Bookmaker allow him to explain less than 1% of additional variability. It's true also that the ChiPS Predictors are highly optimised and, consequently, risk having been overfit to recent history, but, clearly, much of the information content relevant to predicting AFL game outcomes resides in the result history of the competing teams and the venue at which the contest is being played.

On average, all the other factors, which include who's playing and who isn't, contribute only at the margin to the outcome of a typical game.

1. A surprising proportion of the variability in game outcomes is unexplained and probably unexplainable

As we've just seen, even the best Margin Predictor of all, an optimised one based on the TAB Bookmaker's pre-game head-to-head prices, explains only about 35% of the variability in game margins. That leaves 65% unexplained.

Variability in game margins has two sources: variability due to the differing relative abilities of the teams as we look across games, and variability that can be attributed to 'on the day' factors and inherent uncertainty in the outcome of a particular game.
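
That's just the law of total variance: writing $M$ for the game margin and conditioning on the matchup (the teams' relative abilities and the venue),

$$\operatorname{Var}(M) = \operatorname{Var}\left(\operatorname{E}[M \mid \text{matchup}]\right) + \operatorname{E}\left[\operatorname{Var}(M \mid \text{matchup})\right]$$

with the first term capturing differences in relative ability across games, and the second the 'on the day' variability within them.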

It's reasonable, I think, to assume that the TAB Bookmaker will do a good job in assessing and incorporating in his prices that first source of variability (a blog on this is in the planning stages), which implies that the majority of the remaining, unexplained variability is due to factors of the second kind: things that were inherently unpredictable before the game started or that were down to idiosyncratic, bounce-of-the-ball incidents.  

To me, a finding that almost two-thirds of the variability in a set of typical game results can be attributed to things we couldn't know before the contest - and perhaps not even during it - is, like many of the other findings reported in this blog, truly remarkable.

****

If you've come across other surprising results on the MatterOfStats site or in your own analyses or observations of AFL, please feel free to leave a comment on this blog post or to send me an e-mail via the address shown at the top of the navigation bar on the right.