Who's Best? It's All About The Base(s)

Simple question: which of MoS' 17 Margin Predictors has performed best over the past two seasons? If Mean Absolute Error (MAE), the average absolute difference between the predicted and actual margins, is your preferred metric then you'd rate Combo_7 best because its combined MAE for the two seasons is the lowest of all the Predictors at just 27.8 points per game.
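To make the metric concrete, here's a minimal Python sketch of an MAE calculation. The margins are invented for illustration, not any actual Predictor's output.

```python
# A minimal sketch of Mean Absolute Error (MAE) for a Margin Predictor.
# The margins below are made-up numbers, not actual MoS data.
def mae(predicted, actual):
    """Average absolute difference between predicted and actual margins."""
    return sum(abs(p - a) for p, a in zip(predicted, actual)) / len(actual)

predicted_margins = [12, -5, 30, 8]    # positive = home team tipped to win
actual_margins = [20, 3, 27, -10]      # positive = home team won

print(mae(predicted_margins, actual_margins))  # 9.25
```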

Instead, if accuracy were all that mattered to you - that is, selecting the correct winner, regardless of the size of the predicted or actual final margin - then either Win_3 or Win_7 would be your pick because they both out-tipped every other Margin Predictor across the two seasons, finishing with 298 correct predictions from 414 games, which is an impressive 72% record.

Looking beyond just the best-performed Predictors, we find quite a lot of variability in rankings based on these two metrics. Win_7 finishes 9th on MAE but (as noted) 1st on Accuracy; Win_3 finishes 8th on MAE and also 1st on Accuracy; Bookie_3 finishes 6th on MAE but 14th on Accuracy; while Combo_7 finishes 1st on MAE but 7th on Accuracy. Overall, the rank correlation for these metrics across the two seasons combined is just +0.31 - and for 2013 alone, the correlation is actually negative.
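For readers curious about the mechanics, here's a small sketch of a rank correlation calculation, assuming the familiar Spearman measure and using made-up rank lists rather than the actual Predictor rankings:

```python
# Sketch of the Spearman rank correlation via the classic d-squared
# formula. Valid for untied ranks; the rank lists here are illustrative.
def spearman(rank_x, rank_y):
    """Spearman correlation for two equal-length lists of untied ranks."""
    n = len(rank_x)
    d2 = sum((x - y) ** 2 for x, y in zip(rank_x, rank_y))
    return 1 - 6 * d2 / (n * (n ** 2 - 1))

# Identical rankings give +1; fully reversed rankings give -1.
print(spearman([1, 2, 3, 4], [1, 2, 3, 4]))  # 1.0
print(spearman([1, 2, 3, 4], [4, 3, 2, 1]))  # -1.0
```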

That isn't news, I know, but it's still worth reminding ourselves of this possibility from time to time. The notion of best is rarely absolute.


Accuracy and MAE measure a Predictor's performance relative to some fixed outcome measure, either who wins or by how much. What if, as an alternative, we were to pit the Margin Predictors against one another on a head-to-head basis as a more direct way of assessing their relative merits?

Now Accuracy is a fairly boring head-to-head metric since a Predictor only scores when it selects the winning team and its matched Predictor doesn't, so let's, MAE style, use the absolute error metric as the arbiter of talent. Under this measure, Predictor A defeats Predictor B in a contest if its predicted margin is nearer the final result. Note that, under this methodology, a Predictor can be closer than another even if it predicts the wrong winning team and its opponent picks the right one. For example, if Predictor A tips the Swans to win by 3 and Predictor B tips the Cats to win by 20, a Cats win by 2 points would be a win for Predictor A on the absolute error metric because its absolute error is 5 points compared to Predictor B's 18 points.
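That rule is simple enough to sketch in code. Here's a minimal Python version using the Swans/Cats example above; the sign convention (positive margins favour the first-named team) is mine:

```python
# Sketch of the head-to-head, nearer-the-pin rule: whichever Predictor's
# margin has the smaller absolute error wins, regardless of which team
# was tipped. Margins are from the first-named team's viewpoint.
def head_to_head_winner(pred_a, pred_b, actual):
    """Return 'A', 'B', or 'tie' based on absolute prediction error."""
    err_a, err_b = abs(pred_a - actual), abs(pred_b - actual)
    if err_a < err_b:
        return "A"
    if err_b < err_a:
        return "B"
    return "tie"

# Predictor A tips the Swans by 3 (+3); Predictor B tips the Cats by 20
# (-20). The Cats win by 2, so the actual margin is -2 for the Swans.
print(head_to_head_winner(3, -20, -2))  # A (error of 5 beats error of 18)
```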

The table below shows the winning rates for the 2013 season for every pairing of Predictors, with rows designating the Predictor for whom the percentage applies. So, for example, Combo_7 (C7) was nearer the final margin than Bookie_3 (B3) in just over 55% of contests in 2013.

The best head-to-head record belongs to RSMP_Weighted (RSMPW) who produced a better than 50% performance against all Predictors except Combo_NN2 (CNN2). That sole defeat by CNN2 is slightly surprising since RSMP_Weighted recorded an MAE more than 1.8 points per game lower than Combo_NN2's. Combo_NN2, it seems, had something of a "There Was a Little Girl" performance in 2013. When it was good, it was ... you know the rest.

Other Predictors to perform well on this metric in 2013 were RSMP_Simple and Combo_7, each with a 14 and 2 record against the other Predictors, and Bookie_LPSO and Combo_NN2, each with a 13 and 3 record.

The rank correlation between the Predictors' head-to-head performances and their MAE rankings was +0.90 in 2013, while that between their head-to-head performances and their rankings on Accuracy was -0.31. Win_3 and Win_7 were significant contributors to that curious negative correlation, finishing joint-second on Accuracy but 13th and 11th on the head-to-head metric.

Things were a little more sensible in 2014 where these same correlations were +0.77 and +0.78.

Even 2014 threw up its anomalies though. Win_3 defeated all 16 other Margin Predictors head-to-head despite finishing only 7th on MAE and 4th on Accuracy. As well, Win_3's stablemate, Win_7, went 13 and 3 head-to-head but finished only 8th on MAE. 

Equally startling, Bookie_3 defeated only 4 of its 16 rivals but recorded the 5th-best MAE, while Bookie_LPSO was defeated by 5 rivals after finishing the season 2nd on MAE.

Combining the two seasons brings some additional stability to the rankings, the rank correlation between MAE and the head-to-head records coming in at +0.92, with no Predictor ranked more than 4 places differently on the two measures. RSMP_Simple and Combo_7 have the best combined head-to-head records, both having seen off 26 of their 32 opponents across the two seasons. Bookie_LPSO and Bookie_9 both have the next-best records of 24 and 8, while ProPred_7 is comfortably last with a record that's a mirror image of the leaders' at 6 and 26.


I'll finish the blog today with an assessment of the MoS Margin Predictors using another head-to-head metric. This one's based on the scoring used in the FMI Tipsters League (an exceptionally well-run competition which I'm hoping to join in 2015).

In the FMIT League, Predictor A, when facing Predictor B in a particular game, scores:

  • 1 point if it tips the correct team and is nearer the correct margin than Predictor B. (Note that a Predictor tipping the wrong team is never assessed as being nearer regardless of the respective margins predicted. A predicted win by 100 points for a team that wins by 1 point is therefore deemed nearer the correct result than a prediction of the loser to win by 1 point.)
  • An additional point if its prediction is within 3 points of the actual final margin.
  • A further point if it also predicts the exact final margin.

It's possible, therefore, for a Predictor to score a maximum of 3 points in a single game. Picking the wrong team always results in a score of 0, while the score from picking the correct team depends on the relationship between the Predictors' margin predictions and the actual result.

Where two Predictors have margin predictions that are identical or equidistant from the actual final margin, both score points as appropriate.

In the FMIT League, Predictors tip only in whole numbers, so in applying the FMIT League scoring system I've first rounded all margin predictions to the nearest integer. Where this results in a prediction of a draw, for simplicity's sake I've made the margin prediction either -1 or +1 depending on the sign of the original prediction. If the original prediction was for an exact margin of 0 I've made it +1.
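Putting the scoring rules and the rounding convention together, here's a sketch of how a single game might be scored in Python. The function names are my own, and the treatment of the ambiguous cases (drawn games, and the conditions attached to the bonus points) reflects my reading of the rules above rather than the official FMIT ones:

```python
# Sketch of FMIT-style scoring for one Predictor against one rival in a
# single game. Margins are assumed positive for a home win, negative for
# an away win, with the actual margin a whole number.

def adjust_prediction(margin):
    """Round to the nearest whole number, converting any resulting draw
    prediction to +1 or -1 according to the sign of the original
    prediction (an exact 0 becomes +1), as described above."""
    rounded = round(margin)
    if rounded == 0:
        return 1 if margin >= 0 else -1
    return rounded

def fmit_score(pred, rival_pred, actual):
    """FMIT points scored by the first Predictor. Drawn games
    (actual == 0) are treated as a wrong tip for both - a simplification."""
    pred = adjust_prediction(pred)
    rival_pred = adjust_prediction(rival_pred)
    if pred * actual <= 0:              # wrong team tipped: always 0 points
        return 0
    score = 0
    rival_correct = rival_pred * actual > 0
    # 1 point for tipping the winner and being at least as near as the
    # rival; a rival tipping the wrong team is never assessed as nearer.
    if not rival_correct or abs(pred - actual) <= abs(rival_pred - actual):
        score += 1
    if abs(pred - actual) <= 3:         # bonus point for being within 3
        score += 1
        if pred == actual:              # further bonus for the exact margin
            score += 1
    return score

# The example from the rules: a 100-point tip for a team that wins by 1
# still beats a rival who tipped the loser by 1.
print(fmit_score(100, -1, 1), fmit_score(-1, 100, 1))  # 1 0
```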

The 2013 FMIT head-to-head records are provided in the following table in which the entries reveal the net FMIT score across the 207 games for the relevant pairing. Bookie_3, for example, scored 16 more points than Bookie_9 under FMIT scoring in 2013.

RSMP_Weighted excels under this method, defeating 14 of 16 opponents, some of them by 40 points or more. Combo_7, who was ranked 4th on MAE and 2nd on the earlier head-to-head, nearer-the-pin metric, manages only an 8 and 8 record here, which ranks it 10th.

Win_7, by comparison, defeats only 5 opponents on the nearer-the-pin metric, ranking it 11th, but conquers 12 on the FMIT scoring methodology, ranking it 4th. No other Predictor is ranked more than 4 places differently on the two methods.

In 2014 there are also just a couple of Predictors whose rankings on the FMIT scoring methodology are very different from their rankings on the nearer-the-pin approach. Bookie_3, with a 4 and 12 nearer-the-pin record, ranks 13th on that metric, but goes undefeated on the FMIT scoring methodology and so, of course, ranks 1st.

Bookie_9 suffers the opposite fate, its 12 and 4 nearer-the-pin record, which ranks it 3rd, lying in stark contrast to its 5 and 11 FMIT scoring record, which ranks it 14th.

Overall though, since the number of cases of vastly different rankings is small, in both years there's a reasonably high rank correlation between MAE rankings and FMIT scoring rankings: +0.80 in 2013, and +0.61 in 2014. When the two seasons' results are combined, the correlation comes in at +0.71.


In the table that follows I've summarised the performance of all Margin Predictors on all four of the metrics discussed in this blog, separately for 2013 and 2014, and combined.

The overall picture is one of broad agreement but there remain gentle reminders of the need to think deeply and carefully about the performance metric that's appropriate in a given analysis. Even in the simple analyses performed for this blog, 5 of the 17 Predictors can lay claim to being the best Predictor on the basis of the two-year combined performances:

  • Bookie_3 is best on the FMIT Scoring metric
  • RSMP_Simple and Combo_7 are joint-best on the nearer-the-pin metric 
  • Win_7 and Win_3 are joint-best on the Accuracy metric
  • Combo_7 is best on the MAE metric

Since no clear winner emerges from that comparison, why not average the rankings across the four metrics? If we do that, a 6th contender arises, since Bookie_LPSO's average ranking of 3rd is the best (that is, numerically lowest) of all.
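Averaging ranks is simple enough, but for completeness here's a sketch of the idea, with illustrative placeholder rankings across the four metrics rather than the actual MoS figures:

```python
# Averaging each Predictor's rankings across the four metrics and finding
# the Predictor with the lowest (best) average. The rankings below are
# illustrative placeholders, not actual MoS results.
rankings = {
    "Bookie_LPSO": [2, 3, 4, 3],
    "Combo_7": [1, 1, 7, 10],
    "Win_3": [8, 13, 1, 6],
}
averages = {name: sum(r) / len(r) for name, r in rankings.items()}
best = min(averages, key=averages.get)
print(best, averages[best])  # Bookie_LPSO 3.0
```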

In short, there's no single answer to the question of which Margin Predictor is best - it all depends on context and purpose.