prediction markets are underrated

in october 2024, a few weeks before the US presidential election, the major polling aggregators had the race essentially tied. fivethirtyeight showed harris with a slight lead. the forecast models showed a 50-50 coin flip. and polymarket, the crypto-based prediction market, had trump winning at around 65%. the divergence was stark: a 65% probability implies a race that is not close, and the polls were showing something close to a tie. somebody had to be wrong. who was right? turns out: polymarket. trump won. the prediction market had priced in something the polls hadn't.

i tell this story not to suggest prediction markets are infallible — they're not, and i'll get to that — but because the mechanism is worth understanding. a poll asks people what they think will happen. a prediction market asks people to put money on what they think will happen. these sound like the same question. they are not. the difference is consequences: when you're risking real dollars on your belief, you think harder, research more, and stop letting motivated reasoning run unchecked. prediction markets don't just aggregate opinions. they aggregate opinions with a skin-in-the-game filter applied, which systematically removes the laziness and posturing that make opinion aggregation unreliable.
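to make the "money on your belief" point concrete, here's the arithmetic a trader is implicitly doing. a toy sketch — the probabilities and prices are made up:

```python
def ev_per_share(my_prob: float, market_price: float) -> float:
    # a yes-share bought at market_price pays $1 if the event happens, $0 if not
    return my_prob * 1.0 - market_price

# if you honestly believe 65% and the market says 50 cents, buying has
# positive expected value (about $0.15 per share)
print(ev_per_share(0.65, 0.50))

# if you only believe 40% but bet yes anyway to posture, the bet loses
# money in expectation
print(ev_per_share(0.40, 0.50))
```

the filter is exactly this sign: beliefs you won't bet at the market price stay out of the price.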

the efficient market hypothesis has a lot of problems when applied to financial markets — stocks are clearly not always rationally priced, as anyone who watched GameStop can attest. but it works better in prediction markets because the thing being priced has a known resolution date and a binary outcome. "trump wins the election" either happens or it doesn't. there's no earnings growth or discount rate to disagree about. you're just trying to estimate a probability, and if you're systematically wrong, you lose money to the people who are right. that selection pressure, over time, pulls prices toward accuracy.
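that selection-pressure claim can be simulated. a minimal sketch with made-up traders: each has a fixed belief, bets Kelly-style against the market every round, and the market price is the wealth-weighted average belief. wealth (and therefore price-setting power) flows to whoever is closest to the true probability:

```python
import random

random.seed(0)
TRUE_P = 0.7                      # true probability of the recurring event
beliefs = [0.3, 0.5, 0.7, 0.9]    # four traders with fixed, differing beliefs
wealth = [1.0, 1.0, 1.0, 1.0]

def price() -> float:
    # market price as the wealth-weighted average of the traders' beliefs
    total = sum(wealth)
    return sum(b * w for b, w in zip(beliefs, wealth)) / total

start = price()
for _ in range(2000):
    happened = random.random() < TRUE_P   # the event resolves yes or no
    m = price()
    for i, b in enumerate(beliefs):
        # Kelly-style update: your bankroll grows when your belief
        # beats the market price, shrinks when it doesn't
        wealth[i] *= (b / m) if happened else ((1 - b) / (1 - m))
end = price()
print(f"price before: {start:.2f}, price after: {end:.2f}")
```

the price starts at the naive average (0.60) and is pulled toward 0.70 as the inaccurate traders go broke — nobody has to be individually rational, the bankroll dynamics do the work.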

the reason prediction markets didn't take off earlier is regulatory. intrade, which ran through the 2000s and was genuinely good at forecasting, was sued by the CFTC in 2012, cut off its US customers, and shut down soon after. us persons couldn't participate in most offshore alternatives. the information value was real; the regulatory environment was hostile. polymarket changed this by building on polygon (a blockchain), making it difficult to block US participation without banning the underlying blockchain, and operating in a regulatory gray area that the CFTC has tried and, as of this writing, failed to fully close. the result is a global, permissionless prediction market with real liquidity — over a billion dollars in volume on the 2024 election alone. the crypto rails solved the regulatory problem by making it harder to enforce.

but prediction markets have real failure modes, and i think they're underappreciated by people who've become fans. thin markets don't work. if only twenty people are trading a contract, the price is just the average of those twenty people's opinions, which is not much better than a small survey. liquidity is load-bearing for the mechanism: the whole point is that informed traders can profit by correcting mispriced contracts, and they'll only show up if the market is deep enough for a correct bet to be worth the research. polymarket on the presidential election had deep liquidity. polymarket on "will a specific country pass a specific piece of legislation this year" often does not.
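the thin-market point is easy to see in a toy model (all numbers made up): if the price is roughly a plain average of the traders' noisy opinions, its spread shrinks with the square root of the trader count:

```python
import random
import statistics

random.seed(1)
TRUE_P = 0.60   # the hypothetically correct probability

def market_price(n_traders: int, noise: float = 0.15) -> float:
    # toy model: price = plain average of noisy individual opinions
    opinions = [min(1.0, max(0.0, random.gauss(TRUE_P, noise)))
                for _ in range(n_traders)]
    return statistics.mean(opinions)

# how much the "price" wobbles across re-runs, thin vs deep
thin = statistics.pstdev([market_price(20) for _ in range(500)])
deep = statistics.pstdev([market_price(2000) for _ in range(500)])
print(f"price std dev with 20 traders: {thin:.3f}, with 2000: {deep:.3f}")
```

note that this toy model is exactly the degenerate regime described above — no informed trader bothering to correct anything, just a survey with extra steps. a deep market is better than the square-root improvement suggests, because correction becomes profitable.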

long-dated, vague questions also produce noise rather than signal. a contract on "will AGI arrive by 2030" locks up capital for years on a question with a fuzzy resolution criterion, and nobody has edge on it; the price mostly reflects sentiment and risk appetite rather than any actual probability assessment. compare this to "will the fed cut rates in december" — a near-term, precisely defined, heavily researched event — where the market price is genuinely informative.

there's also a manipulation risk that became visible in 2024. someone with a large position can move prices, which creates a narrative, which gets reported in the news, which can affect public sentiment. whether this is actually happening at scale is genuinely unclear — manipulation is expensive because you have to buy the position and hold it, and rational arbitrageurs will trade against you — but the concern isn't imaginary.

the most interesting future application isn't elections. it's internal corporate decision-making. google ran internal prediction markets in the mid-2000s and got useful forecasts about launch dates and demand. the mechanism is appealing: instead of your product manager confidently telling leadership that a project will ship on time, you have a market where employees can anonymously bet against that claim. real disagreement surfaces. the manager's optimism has to confront the whisper network. bullshit becomes expensive. this application hasn't taken off at scale, probably because corporate culture generally doesn't reward employees for predicting that their company's projects will fail, even when doing so correctly would improve outcomes. the mechanism works when the incentives work. getting the incentives right inside an organization is its own problem.


i check polymarket and metaculus somewhat obsessively. the practice of making explicit probability estimates — not "i think X will happen" but "i put 60% on X" — changes how you reason about uncertainty. you stop being able to hide in vague language. you have a track record. you can see whether you're systematically overconfident or underconfident. it's useful even without money on the line, which is why metaculus (non-financial predictions) has a genuinely impressive calibration record among its active users. try it.
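the scoring rule behind "you have a track record" fits in one line. a sketch with made-up predictions — the brier score is the mean squared error between your stated probabilities and what actually happened; lower is better, and always saying 50% scores exactly 0.25:

```python
def brier(forecasts) -> float:
    # forecasts: list of (stated_probability, outcome), outcome is 1 or 0
    return sum((p - o) ** 2 for p, o in forecasts) / len(forecasts)

# an "i put 60% on X" style record: (probability you stated, what happened)
mine = [(0.9, 1), (0.7, 1), (0.6, 0), (0.8, 1), (0.3, 0)]
coin_flip = [(0.5, o) for _, o in mine]   # baseline: always say 50%

print(f"your brier score:      {brier(mine):.3f}")
print(f"always-50% baseline:   {brier(coin_flip):.3f}")
```

beating the baseline means your numbers carry information. for the overconfidence check, bucket your predictions: your "60%" calls should resolve yes about 60% of the time, and if they resolve at 45%, that's the systematic overconfidence the paragraph above is talking about.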