Efficacy of Game Statistics for Wagering

In a previous blog, I explored 23 team summary game statistics that are provided by the AFL - statistics such as disposals, one percenters, clangers, and so on - to see which of them, if any, might help to predict game outcomes. Only four metrics were included in the final binary logit model I created:

  • Behinds per game
  • Tackles per game
  • Inside 50s per game
  • Average Disposal Efficiency

These metrics, calculated for the season to date for both teams in a contest and then differenced, provided predictive information about a game's outcome over-and-above that provided by the TAB Bookmaker's own probability assessment, MAFL's MARS Ratings for the two teams, and the interstate nature of the clash.
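For readers who'd like to reproduce the feature construction, here's a minimal sketch of that season-to-date differencing. The column names - game_id, season, round, home_team, home_behinds and so on - are assumptions made purely for illustration, not the actual layout of the MAFL data.

```python
import pandas as pd

METRICS = ["behinds", "tackles", "inside_50s", "disposal_efficiency"]

def ytd_averages(games: pd.DataFrame) -> pd.DataFrame:
    """One row per (game_id, team): season-to-date averages of METRICS,
    calculated over the team's earlier games in that season only."""
    frames = []
    for side in ("home", "away"):
        renamed = games.rename(columns={f"{side}_{m}": m for m in METRICS})
        renamed = renamed.assign(team=games[f"{side}_team"])
        frames.append(renamed[["game_id", "season", "round", "team"] + METRICS])
    long = pd.concat(frames, ignore_index=True).sort_values(["season", "round"])
    ytd = (long.groupby(["season", "team"])[METRICS]
               .transform(lambda s: s.shift().expanding().mean()))  # exclude the current game
    return pd.concat([long[["game_id", "team"]], ytd.add_suffix("_ytd")], axis=1)

# Merging this table onto each game once for the home team and once for the away
# team, then subtracting (home minus away), gives the differenced predictors.
```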

In that earlier blog we quantified the impact of including these metrics by comparing the probability forecasts that would be made by a model incorporating them with those made by a model using only the TAB Bookmaker's probability assessments.

That calculation reveals that the fitted values are, indeed, different for the model that includes the four metrics compared to the simpler model, but it doesn't tell us whether those values are, in any sense, better. To address that issue, today I'll quantify the value of those game statistics by measuring their effects in a wagering context.

Specifically, I'll calculate the returns to Kelly-staking on the basis of the probability estimates from the binary logit model described in the previous blog - the one that included the TAB Bookmaker probability estimates, the team MARS Ratings, the interstate status of the game, as well as the four game statistics.
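For anyone unfamiliar with Kelly-staking, it sizes each wager in proportion to the estimated edge over the market price. Here's the textbook version for a single bet at decimal odds; the staking rules actually used for the Funds may differ in detail.

```python
def kelly_fraction(p: float, decimal_odds: float) -> float:
    """Fraction of the bank to wager on an outcome we rate a p chance,
    offered at the given decimal price (zero if we have no edge)."""
    edge = p * decimal_odds - 1            # expected profit per unit staked
    return max(edge / (decimal_odds - 1), 0.0)

# Example: rating the home team a 60% chance at a price of $1.80
# implies a stake of (0.6 * 1.8 - 1) / 0.8, or about 10% of the bank.
print(kelly_fraction(0.60, 1.80))
```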

As well though, I'll present the wagering results for a binary logit model that I've since developed that excludes the TAB Bookmaker probability estimates but still includes game statistics. The process of developing that model followed much the same lines as the process described in the previous blog and arrived at a quite similar final model, except that the variable relating to the TAB Bookmaker's probability estimate was (deliberately) excluded and a further metric, the difference in YTD Frees Against, was included.

THE MODELS

Here's what the two models look like:

The model on the left is the same as the model from the previous blog, while that on the right is a completely new model that does not require any bookmaker input. It has a superior AIC but a slightly inferior Brier Score. Though I've not shown it here, it also predicts game results, using a 50% threshold, at a slightly lower rate.
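If you wanted to replicate that comparison, something like the following would do: fit each binary logit and then compare the AIC and the Brier Score of the fitted probabilities. The predictor names here are placeholders for the differenced year-to-date metrics, the two MARS Ratings, the interstate flag and, for the left-hand model, the TAB Bookmaker probability; they're not the actual variable names used in the models above.

```python
import numpy as np
import statsmodels.api as sm

def fit_and_score(df, predictors, outcome="home_win"):
    """Fit a binary logit and return its AIC and Brier Score."""
    X = sm.add_constant(df[predictors])
    model = sm.Logit(df[outcome], X).fit(disp=0)
    fitted = model.predict(X)
    brier = np.mean((fitted - df[outcome]) ** 2)   # lower is better
    return model.aic, brier

with_bookmaker = ["bookie_prob", "home_mars", "away_mars", "interstate",
                  "behinds_ytd_diff", "tackles_ytd_diff",
                  "inside_50s_ytd_diff", "disp_eff_ytd_diff"]
without_bookmaker = ([p for p in with_bookmaker if p != "bookie_prob"]
                     + ["frees_against_ytd_diff"])

# aic_left, brier_left = fit_and_score(games, with_bookmaker)
# aic_right, brier_right = fit_and_score(games, without_bookmaker)
```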

Before I move on to analysing the wagering performance of these two models, there are a couple of things I'd like to note. Firstly, in the model on the right, the YTD Frees Against variable enters with a positive and statistically significant coefficient, which means that, all else being equal, you should prefer the team that has a history of giving away more free kicks. 

The Variable Importance column records that the YTD Frees Against variable contributes only 2% of the explained variance in game outcomes, however, so we probably shouldn't make too much of this finding. That same column also highlights the importance of past Inside 50s performance to the assessment of a team's current chances. That variable contributes 21% of the explained variance, which is almost as much as is contributed by each of the MARS Ratings. The YTD Behinds variable also contributes over 10% to the explained variance, reinforcing my previous comments about the importance of this metric and the fact that it is poorly reflected in other game statistics and, often, in prior game margins and hence MARS Ratings.
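I've not spelled out here how those Variable Importance percentages were derived. For a logit model, one common way of getting at something similar is to measure each predictor's share of the drop in explained deviance when it alone is removed and the model refitted; a sketch of that approach, again with hypothetical column names, is below.

```python
import statsmodels.api as sm

def drop_one_importance(df, predictors, outcome="home_win"):
    """Each predictor's share of the total deviance increase caused by
    removing that predictor alone and refitting the logit."""
    def llf(cols):
        return sm.Logit(df[outcome], sm.add_constant(df[cols])).fit(disp=0).llf

    full = llf(predictors)
    drops = {p: 2 * (full - llf([q for q in predictors if q != p]))
             for p in predictors}
    total = sum(drops.values())
    return {p: round(d / total, 3) for p, d in drops.items()}
```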

WAGERING PERFORMANCE

To create this next table I estimated the gains and losses that would have accrued to a punter who Kelly-staked on the basis of the fitted probabilities from these two models. Note that the results shown here are based on models fitted to the same period for which they are being assessed, so the wagering results are likely to be optimistic. Also note that drawn games have been excluded from the fitting and the assessment process.

With those caveats duly noted, it's still encouraging to see how well both models perform:

(By the way, both models have been fitted to all games from season 2006 through to the end of Round 3 of season 2013. I've excluded the 2013 data from this table.)
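For the record, the ROI figures in the table amount to something like the following calculation: Kelly-stake each game off the fitted probability and the corresponding market price, then divide the total profit by the total amount wagered. The column names, and the simplifying assumption of a notional one-unit bank per game (that is, no compounding), are mine for the purposes of the sketch.

```python
import pandas as pd

def kelly_roi(df: pd.DataFrame, prob_col: str, price_col: str, won_col: str) -> float:
    """ROI from Kelly-staking a notional one-unit bank on each game."""
    edge = df[prob_col] * df[price_col] - 1
    stakes = (edge / (df[price_col] - 1)).clip(lower=0)   # bet only with a positive edge
    profit = stakes * ((df[price_col] - 1) * df[won_col] - (1 - df[won_col]))
    return profit.sum() / stakes.sum()                    # profit per unit wagered

# e.g. kelly_roi(games, "fitted_prob", "home_price", "home_won") for home-team-only wagering
```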

Let's focus first on the results in the upper block, which pertain to the model including the TAB Bookmaker's probability assessment of the home team's chances. Wagering on home and away teams alike, this model turns in an ROI of +11% across the seven seasons, with positive returns in every year except 2006, and returns only slightly above breakeven in 2009 and 2010.

Even with the ability to calibrate itself to adjust for the empirical generosity in the Bookmaker's pricing of home teams, the model still finds itself generally unable to make a profit when wagering on away teams, so much so that eschewing away wagering altogether lifts its ROI to 16%. I think the only reasonable conclusion to come to about the repeated finding that wagering on away teams is a losing proposition is that the additional overround they tend to carry means that even well-calibrated models don't stand a chance.
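For those unfamiliar with the term, the overround is the amount by which a market's implied probabilities sum to more than 1. Here's a quick sketch using made-up prices; note that how the total is shared between the home and the away price can't be read directly off the prices themselves, which is why the claim about away teams carrying more of it rests on empirical wagering results rather than simple arithmetic.

```python
def overround(home_price: float, away_price: float) -> float:
    """Total overround in a two-outcome head-to-head market."""
    return 1 / home_price + 1 / away_price - 1

print(overround(1.60, 2.30))   # about 0.06, i.e. roughly a 6% overround
```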

So, wagering on the basis of this model, and only when it suggested a wager on the home team, would have led a punter to bet about 70 to 75 times a season, with each wager averaging about 11.5% of the Fund. Only in 2006 would a loss have been made.

Next let's look at the model for which we've ignored the information content in the Bookmaker's pricing.

The ROI for this model from wagering on home and on away teams is slightly higher than the ROI for the previous model, but the return comes on the back of about 30% more wagering activity and, as we'll see below, larger average bets. So, assuming the same unit size, using this model would have produced an almost 80% higher dollar return.

This model also fails to produce consistent returns from away team wagering, though it does produce a higher (i.e. less unprofitable) ROI across the seven seasons of just -3%. It also, in absolute terms, makes a smaller loss, despite being considerably more active in terms of the number of wagers made and making larger average wagers.

A home alone policy would, therefore, also benefit a punter wagering on the basis of this model. Such a strategy would yield an ROI of 17%. The number of wagers made by such a punter would be only slightly higher than a punter using the first model (530 versus 498), but the average bet size would be almost 50% larger. As a consequence, the absolute return, assuming the same unit size, would be about 65% higher.

BOLDNESS OF PROBABILITY ESTIMATES

What's dissuaded me in the past from creating a Fund based on an algorithm that excludes information about bookmaker prices has been a fear that such an algorithm would too frequently produce probability estimates ludicrously different from the Bookmaker's.

The current algorithm, while it does occasionally differ markedly from the TAB Bookmaker in its assessments of the home team's chances, doesn't appear to make a habit of it, nor rate raging underdogs as sure-things or short-priced favourites as no-hopers:

About one half of the model's fitted probabilities are within 5% points of the TAB Bookmaker's assessment, and about three-quarters are within 10% points. Only about 10% differ by more than 15% points, and just 4% by more than 20% points.

As well, the blue line, which is a loess fit to the data, is very close to a line at 45 degrees. This means that the fitted model produces probability estimates that are quite well-calibrated, if we take the TAB Bookmaker's probabilities as the benchmark against which to measure calibration.
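Both of those checks - the banded differences and the loess fit - are straightforward to reproduce from the fitted and the Bookmaker probabilities. A sketch follows, using the lowess smoother from statsmodels and assumed inputs.

```python
import numpy as np
from statsmodels.nonparametric.smoothers_lowess import lowess

def compare_to_bookmaker(fitted: np.ndarray, bookie: np.ndarray) -> np.ndarray:
    """Report how often the model is within each band of the Bookmaker's
    probability, and return a lowess fit of one against the other."""
    diff = np.abs(fitted - bookie)
    for band in (0.05, 0.10, 0.15, 0.20):
        print(f"within {band:.0%} points: {np.mean(diff <= band):.0%}")
    # The returned (bookmaker, smoothed model) pairs lying near the 45-degree
    # line indicate broad agreement with the Bookmaker's assessments.
    return lowess(fitted, bookie, frac=0.5)
```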

VARIABILITY OF RETURNS

It's the differences between the model's fitted probabilities and the Bookmaker's assessments, however, that drive the wagering activity, so it's conceivable that the returns to the model excluding bookmaker input, while superior, might also be more highly variable.

The final chart below, which shows the cumulative returns to the two models we've discussed in this blog, suggests that this is not the case:

CONCLUSION

On the basis of the analysis I've performed for this blog, developing a Fund algorithm incorporating year-to-date game statistics, and excluding Bookmaker price data, seems worth considering. As noted, the returns shown in this blog are almost certainly optimistic, but even a halving of the modelled ROI would still make for a viable Fund.

In practice, I doubt I'd wager using this Fund in the early parts of the season, because the year-to-date metrics will be more highly variable at that stage. There might be ways around this though that would still allow for early-season wagering, for example by constraining the maximum and minimum values that these variables could take on.
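As a concrete, purely illustrative example of that idea, each differenced year-to-date metric could simply be clamped to a plausible range before being fed to the model, so that one or two extreme early rounds can't dominate the probability estimate. The bounds here are invented for the example.

```python
def clamp(value: float, lower: float, upper: float) -> float:
    """Constrain a year-to-date metric difference to a plausible range."""
    return max(lower, min(value, upper))

print(clamp(14.2, -8.0, 8.0))   # a hypothetical extreme Inside 50 differential, capped at +8
```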

Feels like an exercise for the 2013 off-season.