March Madness
Every year, ESPN runs a men's and a women's tournament challenge. (There's $135k in prizes to be won, but ONLY for the men's tournament. Go figure.) What I learned from my new friend is that AI models were able to predict the winners in this year's tournament with 93% accuracy. Getting it 100% correct wins some cash, so there's definitely an incentive to close that last 7%!
His comment that really struck me is that an upset is not *really* an upset. It's usually bad seeding. That is, when a #12 team beats a #5 team, it's usually because the #12 should have been seeded higher and the #5 lower, but the selection committee got it wrong. AI models don't make those mistakes. They can analyze the game stats for every team and predict a winner for each matchup. Basketball is particularly well-suited for this because it's a high-scoring game.

We discussed whether ESPN will ban AI-generated brackets from the contest or create a separate category for them. We're both pretty sure that if they banned them, no one with a model accurate enough to get a bracket 100% correct would hide that fact just to win a contest. There's far more money in AI modeling than in annual tournament contests! That's not the interesting part, though. This is:
What happens when these models get so good that everyone, including the coaches and players, knows in advance who is supposed to win? Wouldn't that influence the outcome? If your team was supposed to win, perhaps you'd play a little lackadaisically. On the other hand, if your team was supposed to lose, perhaps you'd play with a chip on your shoulder. Ultimately, can predictive AI models end up failing because they influence the very thing they're trying to predict? Now THAT would be madness!
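As an aside, the kind of stats-based matchup prediction described above can be sketched in a few lines. This is purely illustrative: the stats, weights, and seeds below are all made up, and real bracket models use far richer features with weights learned from historical data.

```python
import math

def win_probability(team_a, team_b):
    """Toy probability that team_a beats team_b, from stat differentials."""
    # Hypothetical hand-picked weights; a real model would learn these.
    weights = {"off_eff": 0.08, "def_eff": -0.08, "tempo": 0.01}
    score = sum(w * (team_a[k] - team_b[k]) for k, w in weights.items())
    return 1 / (1 + math.exp(-score))  # logistic squash to a probability

# Made-up stats: offensive efficiency, defensive efficiency, tempo.
seed_5  = {"off_eff": 110.0, "def_eff": 98.0, "tempo": 68.0}
seed_12 = {"off_eff": 114.0, "def_eff": 95.0, "tempo": 70.0}

p = win_probability(seed_12, seed_5)
print(f"Model says the #12 seed wins with probability {p:.2f}")
```

In this invented matchup the "#12 seed" grades out better on every stat, so the model favors it; that's the sense in which an "upset" is really just a mis-seeding the model saw through.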
