It’s long past time that I stood back and assessed how well the model is doing at predicting the outcome of the Season 10 contest.
The Idol Analytics model has been run since the Top 11, when I cobbled together a regression analysis (one with some serious issues). The next week I produced the model more or less as it exists today. So, counting from the Top 11, here is how well the model has done. For comparison, I’ve also listed the lowest-vote-getter predicted each week by several other sources:
- DJ Slim (idolbloglive)
- MJ Santilli (MJ’s Big Blog)
- Votefair, a polling website
- Dialidol
- What Not To Sing
Here are the results:
| Round | DJ Slim | MJ Santilli | Votefair | Dialidol | WNTS | Idol | Lowest votes |
|-------|---------|-------------|----------|----------|------|------|--------------|
| Top 11 redux | 1 | 1 | 2 | 0 | 1 | 1 | Thia, Naima |
There have been 9 lowest-vote-getters in the contest since the Top 11. That is, of course, one more than there would normally be, but Casey Abrams received the fewest votes in the first Top 11 round and was saved, and that should still be counted. A 1 means the site guessed correctly (a 2 in the double-elimination week), and a 0 means it missed. Dialidol got Casey (both times) and Stefano. Votefair got Thia and Naima, but then missed every week until Jacob. And so on.
The Idol Analytics model guessed 4 of the 9 correctly, tying DJ Slim for the best result. All the other services got 3 apiece. (The expected value from random guessing is 1.27 correct calls, or about 14% accuracy.) Note that the model would have predicted Ashton and Karen correctly had it been around for the Top 13 and Top 12 rounds. But so did almost everyone else.
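The random-guessing baseline quoted above is easy to reproduce. Assuming the eight rounds since the Top 11 fielded 11, 11, 9, 8, 7, 6, 5, and 4 contestants (my reconstruction, not stated above), and crediting the double-elimination week with a 2-in-11 chance of a hit, a quick sketch of the arithmetic:

```python
# Reproducing the random-guessing baseline. The contestant counts per
# round are my assumption about Season 10's schedule since the Top 11.
rounds = [
    (11, 1),  # Top 11: Casey had the fewest votes but was saved
    (11, 2),  # Top 11 redux: double elimination (Thia, Naima)
    (9, 1), (8, 1), (7, 1), (6, 1), (5, 1), (4, 1),
]

# Expected number of correct calls if each week's guess is random.
expected = sum(eliminated / contestants for contestants, eliminated in rounds)
accuracy = expected / 9  # nine lowest-vote-getters in all

print(round(expected, 2))  # 1.27
print(round(accuracy, 2))  # 0.14
```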
So this site does a bit better than most experts, and better than the other measurement services by about 10 percentage points. Ok. However, there is clearly still a lot of room for improvement.
For one thing, my model is slightly worse than the other prediction services at picking the bottom 3/2. Its overall accuracy was about 53%, identical to Dialidol’s and worse than Votefair’s or WNTS’s. (The latter had an astounding 70% accuracy!) Were I to take the time, it would probably be worth checking for a correlation between a contestant landing in the bottom 3 and having been there in a previous week. That might have picked up Haley’s frequent trips there. The reason this is a good idea is that it starts to correct for the fact that some contestants (particularly women) under-perform their quality; that is, they receive fewer votes than their scores suggest they should.
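To illustrate the kind of check I mean, here is a minimal sketch that compares bottom-3 rates for contestants who have and have not already been there. The weekly records are invented placeholders, not Season 10 data:

```python
# Hypothetical weekly results: (contestant, landed in bottom 3 this week).
weeks = [
    [("A", True), ("B", False), ("C", False)],
    [("A", True), ("B", True), ("C", False)],
    [("A", False), ("B", True), ("C", True)],
]

seen_before = set()
# Maps "has a prior bottom-3 appearance?" -> [bottom-3 count, total weeks]
hits = {True: [0, 0], False: [0, 0]}

for week in weeks:
    # Tally each contestant against their history *before* this week.
    for name, in_b3 in week:
        key = name in seen_before
        hits[key][0] += in_b3
        hits[key][1] += 1
    # Then update the history with this week's bottom 3.
    for name, in_b3 in week:
        if in_b3:
            seen_before.add(name)

for prior, (b3, total) in hits.items():
    print(f"prior B3={prior}: {b3}/{total} = {b3 / total:.2f}")
```

On this toy data, contestants with a prior bottom-3 appearance land there 2/3 of the time versus 3/6 for the rest; with real ballots the same tally would show whether Haley-style repeat visits are predictable.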
Secondly, the model could take into account the effect of performance order on the outcome. A scoring adjustment that follows the overall elimination trends by performance slot would make sense.
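A sketch of what such an adjustment might look like. The linear ±5% bonus is purely illustrative; a real version would be fit to historical elimination rates by slot (early performers tend to fare worse, the closing “pimp spot” better):

```python
def adjust_for_slot(raw_score: float, slot: int, n_performers: int) -> float:
    """Scale a contestant's raw score by where they sang in the show.

    The 0.95..1.05 range is a hypothetical placeholder, not a fitted value.
    """
    position = (slot - 1) / max(n_performers - 1, 1)  # 0.0 = first, 1.0 = last
    return raw_score * (0.95 + 0.10 * position)

print(adjust_for_slot(50.0, 1, 10))   # first performer: 47.5
print(adjust_for_slot(50.0, 10, 10))  # pimp spot: 52.5
```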
Finally, it would probably be a very good idea to sort contestants into categories, such as “Rocker” and “Country Bumpkin,” to account for over-performance. That would no doubt have excluded Scotty McCreery from a couple of projections in which his score clearly indicated he would be in the B3 but there was no real chance of it happening.
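One crude way to sketch this: subtract a per-category offset from a contestant’s bottom-3 probability. The categories and offsets below are invented placeholders; they would need to be estimated from how each archetype’s votes have historically compared to its scores:

```python
# Hypothetical over-performance offsets by contestant archetype.
CATEGORY_BONUS = {
    "Rocker": 0.03,
    "Country Bumpkin": 0.08,  # Scotty-style over-performers
}

def adjusted_b3_probability(b3_prob: float, category: str) -> float:
    """Shrink a bottom-3 probability for archetypes that over-perform."""
    return max(0.0, b3_prob - CATEGORY_BONUS.get(category, 0.0))

print(round(adjusted_b3_probability(0.10, "Country Bumpkin"), 3))  # 0.02
```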
Each of these elements is worthwhile, but putting them in will be time-consuming. Also, as one starts to “refine” a model, one runs the risk of over-fitting it, which I don’t want to do. An example shows why: last night James sang in the pimp spot but was eliminated. To the extent that the model was right about that (he was projected as only slightly less likely than Scotty to go), a performance-order adjustment would have made the prediction worse.