All: HN has had many threads with generic arguments about how prediction markets are/aren't useless, casinos, social ills, and so on. It would be good to avoid that in this case, because OP is full of specific information and arguments. It deserves a less generic discussion.
It's fine, of course, to be for/against/etc. and have whatever view you have. Just please engage with the specific article. It will make for a less repetitive and (therefore) more interesting thread.
tech_ken 2 days ago [-]
One thing that really jumps out at me is the lack of a performance gap between the 90-day and 30-day resolution times. If 2 months of new information doesn't lead to a materially improved forecast, then to me this strongly reinforces the takeaway that these markets aren't really forecasting, so much as "the oracle is largely saying what other oracles already say, just updated faster." Am I misunderstanding the data here?
edit: I'm also going back to my bayesian theory days and would be super interested to see a deep dive into whether these markets are rationally updating their beliefs over time. My recollection is super vague here, but I recall something like non-transitive belief loops can lead to Dutch books (so like Johnny Punter thinks that Trump will win an election against Biden, Biden would win against Ross Perot, and Ross Perot would win against Trump). I'd like to know more about whether these kinds of issues are showing up in these markets?
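To make the Dutch-book worry concrete, here's a toy sketch for a single binary market (all prices made up, fees ignored): whenever the YES and NO quotes don't sum to $1, buying both sides locks in a riskless profit.

```python
# Toy Dutch-book check for a binary market with incoherent quotes.
# Prices are in dollars per $1 payout; all numbers are hypothetical.

def dutch_book_profit(yes_price: float, no_price: float) -> float:
    """Guaranteed profit from buying 1 YES and 1 NO share.

    The pair costs yes_price + no_price and pays exactly $1
    whichever way the event resolves, so any total cost below
    $1 is riskless profit (before fees and slippage).
    """
    return 1.0 - (yes_price + no_price)

# Coherent quotes: YES at $0.62, NO at $0.38 -> no free money.
assert abs(dutch_book_profit(0.62, 0.38)) < 1e-9

# Incoherent quotes: YES at $0.55, NO at $0.40 -> $0.05 locked in.
profit = dutch_book_profit(0.55, 0.40)
print(f"riskless profit per contract pair: ${profit:.2f}")  # prints $0.05
```

The non-transitive-loop case is the same idea across three pairwise markets instead of one two-sided market.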
ddp26 2 days ago [-]
Author here. Great point, and I think this is due to what another commenter points out, that the questions are different.
The right test of this is to take the _same_ markets that run for 90+ days, and check accuracy 90 days out vs 30 days out. I've done this on other prediction market datasets, though not on Kalshi and Polymarket, and found that forecasts are in fact more accurate 30 days out.
I agree that if they weren't, that would be incredibly suspicious!
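In code, the test is roughly this: snapshot each long-running market's price at both horizons and compare mean Brier scores (all numbers below are invented):

```python
# Sketch: compare forecast accuracy 90 vs. 30 days before resolution
# on the *same* set of markets. The data here is made up.

def brier(prob: float, outcome: int) -> float:
    """Squared error of a probability forecast against a 0/1 outcome."""
    return (prob - outcome) ** 2

# Each record: (price 90 days out, price 30 days out, resolved outcome).
markets = [
    (0.40, 0.70, 1),
    (0.55, 0.30, 0),
    (0.80, 0.90, 1),
]

brier_90 = sum(brier(p90, y) for p90, _, y in markets) / len(markets)
brier_30 = sum(brier(p30, y) for _, p30, y in markets) / len(markets)

print(f"mean Brier 90d out: {brier_90:.3f}")
print(f"mean Brier 30d out: {brier_30:.3f}")
# Lower is better; if the 30-day scores aren't lower, something is off.
```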
crazygringo 22 hours ago [-]
On the other hand, 90 days out and 30 days out are both still a very long time before an event happens.
It's going to highly depend on the type of event. But it's not surprising to me that there will often not be much difference, because the main factors affecting the event might not really start to be known until just a few days in advance.
Which really makes me wonder in which categories prediction markets are most useful at a timeframe of months, versus which are most useful at a timeframe of days.
vcf 2 days ago [-]
Interesting read. Regarding the relationship between volume and accuracy, there need not be one in limit-order-book markets like Kalshi and Polymarket. In theory, as long as quotes are accurate and adjust quickly to new information, there is no need (and no incentive) to trade, since prices are efficient. This is the case in US equity markets: most price discovery occurs through quote updates, not through trades.
Studying prediction markets is one of my current research areas. In my latest paper (preprint at https://papers.ssrn.com/sol3/papers.cfm?abstract_id=6443103), we find that on Polymarket, markets are, on average, quite accurate and unbiased. We saw a similar non-pattern between trade volume and accuracy, past a certain threshold.
ghaff 2 days ago [-]
I will definitely take a look. Anecdote but I’m familiar with one multi-year example of a small casual prediction market that seemed like a very good predictor and another one that I don’t have the data from that seemed effective as well over time. I’ve hypothesized why that might be the case but never came to firm conclusions.
torben-friis 2 days ago [-]
Given the request about engaging with this specific article:
>I've thought hard about how to sell prediction markets to consumers. In 2020, I created Google’s current internal prediction market. Since then, I’ve served as the CTO of Metaculus, a non-market-based crowd-forecasting website, and now run FutureSearch, a startup that provides AI forecasters and researchers.
I feel like openly saying you professionally try to make people believe in markets reduces the impact of any further claim.
>Still, there is a benefit to speed. On March 11, 2026, the Financial Times reported that, upon news of Iran War escalation, the Polymarket odds of inflation at or above 2.8% rose to above 90%. This illustrated an immediate domestic impact to US foreign policy, which could influence the public in a way that updates months later from professional economists might not.
I don't understand the idea that this or similar predictions are of any value? "People strongly believe a war will worsen inflation" is information you could get anywhere and not necessarily based on any high quality decision making.
idiotsecant 2 days ago [-]
It's based on high-quantity decision making, and quantity is a sort of quality if you squint and turn your head
dimbletimbers 2 days ago [-]
"Quantity has a quality all its own" —known economic genius, Joseph Stalin
cwmoore 2 days ago [-]
“More is different.” - Nobel laureate Philip W. Anderson
https://www.tkm.kit.edu/downloads/TKM1_2011_more_is_differen...
I mean, someone can have a terribly bad ideology and still be right sometimes. In fact, that's almost always the case.
infinitewars 2 days ago [-]
It sounds like they should be called "indicator markets" rather than "prediction markets", as the data shows they largely just summarize the current knowledge, with little predictive ability.
ddp26 2 days ago [-]
It's true they are "just" summarizing current knowledge. But there are better and worse summaries of current knowledge!
Some summaries, like on some prediction markets, have objective accuracy that is much better than chance.
typs 2 days ago [-]
Sort of. Putting current knowledge into a number can be pretty interesting / useful though. Like many people, I read headlines and pay attention to what's happening in international politics, but from those it's hard to have any sense of how much is reality and how much is bluster in Iran/Panama/Venezuela/Greenland just from general discourse and media. For me, prediction markets have been very helpful in offering some sort of grounding beyond the general noise in areas where I have very little intuition or realistic sense of the possibilities.
You'll note from "Challenges and solution approaches" that it comes with significant caveats and is easily undermined.
crazygringo 1 days ago [-]
I don't understand the distinction you are making.
Obviously they are based on current knowledge. Nobody has any actual crystal ball.
But the outcomes are with regard to future events. So the correct term is predictions.
And they don't "just summarize the current knowledge". The whole point is that they better reflect the knowledge of people who presumably know better because they are willing to put their money where their mouth is, and ignore the vast majority of nonsense. That's not summarization. That's judgment. That's the whole point.
ghaff 1 days ago [-]
My sense is that, for prediction markets to work, there needs to be some real knowledge/analysis/judgement spread across at least a material subset of participants. Simply aggregating random guesses is likely no better than any given random guess.
Put another way there needs to be SOME signal buried in all the noise.
crazygringo 1 days ago [-]
And there is. That's why they work. Prediction markets are not simply aggregating random guesses. Were you somehow under the impression that they were?
rdiddly 15 hours ago [-]
The article does not conclude that "they work." Also, work at what?
ghaff 19 hours ago [-]
Not at all. Just saying that there needs to be knowledge/expertise/experience/etc. of some sort embedded beyond just a lot of people making random guesses and I’m not sure that wisdom of crowds always captures that.
crazygringo 11 minutes ago [-]
But there is, that's the whole point. The more correct and better information you have, the more incentivized you are monetarily to place a wager that will pay off. And the more incentivized you are to place a much larger wager.
Meanwhile, people just making random guesses are more likely to be placing small wagers that are also just canceling each other out.
So the knowledge, expertise, experience that you are talking about is absolutely embedded into this. That is the whole point.
gruez 2 days ago [-]
>as the data shows they largely just summarize the current knowledge, with little predictive ability.
What counts as "little predictive ability"? Do weather forecasts count as "predictions", or are they "indicators" too? Sure, they might have a more consistent track record, but then again weather is less susceptible to human interference than whatever happens in geopolitics within the next year. Prognostications about future climate might be less reliable; do those have to be downgraded to "indicators" too? On the flip side, prediction markets have a very good track record when forecasting certain events, such as interest rate decisions. Does that mean whether it's a "prediction" or an "indicator" depends on what you're forecasting?
cindyllm 2 days ago [-]
[dead]
tunesmith 2 days ago [-]
I recently tried to launch a site for friends and family that allowed people to make confidence predictions on various outcomes so they could track their calibration over time. It was like "I'm 84% certain Kansas City will beat Buffalo." I had a lot of fun with it since I'm a nerd about this stuff, and I actually demonstrably improved my calibration. But the only sources I could find for rapid repeatable bets were sports predictions. And I definitely did not want to include money or betting for all the annoying legal reasons. People had fun using it once for March Madness 2025 but traffic really dwindled after that. My conclusion was that the overall subject just wasn't inherently fun enough to do it without money involved, so I made the site dormant.
Getting better calibrated really is worthwhile, I just wish there was more of an appetite to do that without involving money.
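For the curious, the calibration bookkeeping behind a site like this is tiny, roughly this shape (the predictions below are made up): bucket the stated confidences and compare each bucket's average confidence to its hit rate.

```python
from collections import defaultdict

# Each prediction: (stated confidence as an integer percent, did it happen).
# All data here is invented for illustration.
predictions = [
    (84, True), (60, False), (90, True), (55, True),
    (70, True), (65, False), (95, True), (58, False),
]

buckets = defaultdict(list)  # 50s, 60s, 70s, ... -> (conf, hit) pairs
for conf, hit in predictions:
    buckets[conf // 10 * 10].append((conf, hit))

# Well-calibrated means each bucket's hit rate tracks its average confidence.
for lo in sorted(buckets):
    group = buckets[lo]
    avg_conf = sum(c for c, _ in group) / len(group)
    hit_rate = 100 * sum(h for _, h in group) / len(group)
    print(f"{lo}-{lo + 9}%: said {avg_conf:.0f}%, got {hit_rate:.0f}% (n={len(group)})")
```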
phreeza 1 days ago [-]
Check out Manifold Markets; it sounds like that's what you're looking for?
bo1024 2 days ago [-]
Nice article. One small comment, it's very hard to conclude anything about accuracy over time because the comparisons may not be apples to apples. For example if there used to be lots of questions about if it will rain in Boston and now there are lots of questions about if it will rain in Phoenix, it will look like predictions are getting more accurate, but the questions are just getting easier.
ddp26 2 days ago [-]
Author here. Agree, and I wrote in that section "Absolute accuracy is hard to compare across markets on one platform, and across platforms, because different forecasting questions have different difficulties. I addressed this by tracking similar markets on a single platform over time."
Even doing this, it's not perfectly apples-to-apples. For one thing, in this article I filter only to "interesting" markets, which controls for the % that are "easy" as you describe.
bo1024 1 days ago [-]
Thanks for the reply. Yeah, I think all of your filtering and categorizing makes these analyses really nice.
nbosse 1 days ago [-]
I think a fundamental problem is that the customer of a prediction market is the trader (gambler?), not the public. If you want accurate forecasts, you need sharp traders. If you want sharp traders, you need to pay them a lot. As a platform, the straightforward way to do that is to attract a large number of uninformed gamblers. And ultimately, the accuracy is not determined by volume, but by the fraction of informed and uninformed capital that is trading for idiosyncratic reasons uncorrelated to the "true" probability. Someone has to put in the effort to make the markets accurate, and that someone has to be paid and that money has to come from somewhere.
alembic_fumes 1 days ago [-]
> Someone has to put in the effort to make the markets accurate, and that someone has to be paid and that money has to come from somewhere.
The foundational idea of prediction markets is that this payment comes from the market itself. If you have a market full of suckers, the experts with real knowledge are incentivized to participate in the market to profit from the gap between what the market forecasts and their own expert forecast. This in turn will drive the market's prediction to be more accurate by incorporating the expert knowledge directly in the form of their "bets". In effect the market says put up or shut up to everyone who thinks they know better than the market.
Viewed like this, prediction markets aren't much different from stock markets that also work by the premise of (active) investors claiming to know better than the market. It all follows from the efficient-market hypothesis.
All this said, I find the trend of prediction markets being used to predict what are effectively mostly random outcomes seems a bit strange. But on the other hand, the mere existence of such markets does provide financial incentive to come up with new, better ways to predict these outcomes. This itself would be very useful, at least as long as the subject is more akin to predicting the weather than the movements of a football.
Ultimately, I'm still an optimist when it comes to prediction markets.
mint5 23 hours ago [-]
“It’s clear that Polymarket and Kalshi host these markets to serve bettors, not to produce useful information. Take Kalshi’s “Best AIs this week?” markets, which not only cover too short a time period to be useful in any decision.”
Yeah the site is clearly optimized for betting at the expense of the purported purpose.
Looking at Polymarket, wagers that could yield interesting odds are generally turned into meaningless junk by contrived timelines or other technicalities that aren't at all related to the core thesis of the wager. I.e., the flashy thing the wager talks about isn't the main contention of the odds; an artificially constrained timeline or some other unhelpful technical detail is what the wager hinges on.
It's also strange how few have advance cutoffs. The person with "info" is incentivized to hold their bet to the last minute, so even accepting the premise that the markets provide info, the wagers seem structurally designed to give that info when it's too late to be of use. If they wanted to fix that it would be easy, but they don't. It's as simple as forcing long-run wagers to have a gap between the wager period and the resolution time point.
kqr 1 days ago [-]
> Try it yourself. Pick a topic that is important to you. Try searching Polymarket for probabilities, versus asking Claude about it. I wager you’ll prefer Claude’s take, even if it is less accurate. For one thing, Claude can speak to issues that are not properly resolvable forecasting questions.
I thought this was the very thing we wanted to avoid by creating reputation or money based prediction platforms rewarding statistical accuracy. We already have plenty of pundits speculating inaccurately about vague things they don't know much about.
We don't need AI to get more of that!
rpjt 2 days ago [-]
Random aside: I distinctly remember getting on a phone call with people from the SEC (US Gov't) with the goal of understanding if I could legally start a prediction market. This was during 2020 or 2021. I recall them saying basically "no way" and that it wouldn't be legal, and would be rife with abuse.
Fun times.
2 days ago [-]
HWR_14 22 hours ago [-]
Random aside: Prediction markets can't legally be run under SEC jurisdiction. The CFTC (a different agency) allows and oversees them under their jurisdiction.
You called the wrong agency.
2 days ago [-]
NDlurker 2 days ago [-]
Should have just gone for it
chadgpt2 2 days ago [-]
It's like Uber getting on a phone call with the city to ask if it's legal to run taxis that aren't taxis.
behole 2 days ago [-]
Too bad open bribes weren't as popular back then. A lil grease goes a long way these days (͡° ͜ʖ ͡°)
ghaff 2 days ago [-]
I dove into the prediction markets rabbit hole a number of years back. And I've personally witnessed scenarios where the wisdom of crowds seems to really work. What I have not really read—including in this piece—is a rigorous theory of what makes it effective or not. There are hints here and in the Wisdom of Crowds book but I've never read a really comprehensive theory.
ddp26 2 days ago [-]
Author here. Hal Varian pointed me to this 1992 paper, which I think is still considered the canonical empirical piece on what is actually going on in trading behavior that leads to accuracy (or not): https://www.jstor.org/stable/2117471
chadgpt2 2 days ago [-]
Insider trading is a part of it. If someone bets a few billion dollars that America will invade Iran, the probability shoots up to 98%, even though nobody else thinks it will happen. They can then run a press release about how their platform predicted the invasion before anyone else did.
ghaff 2 days ago [-]
These were Oscar predictions and similar. So no insider trading, and, when I wrote about it, the prevalence of major prediction sites on the Internet seemed to degrade the crowd wisdom because so many people just went with what a few sites were picking.
junto 22 hours ago [-]
From a technical perspective how does one start to work with this historic prediction market data effectively?
Where do you source it from? What tools are most effective?
I’d love to see a more technical article on how that would work as someone who isn’t a data scientist.
I looked on Hugging Face and saw there was a 163GB historical Polymarket dataset. Is that a good place to start?
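To make the question concrete: my impression is that most of these dumps reduce to a per-market price time series plus a resolution flag, so the kind of computation involved is something like the sketch below. Tools like pandas or DuckDB would help at 163GB scale, but the core idea fits in plain Python (the schema here is invented — adapt it to whatever the real dataset uses):

```python
# Hypothetical rows: (market_id, timestamp, price, resolved_outcome).
# Real dumps (e.g. the Polymarket dataset) will have their own schema.
rows = [
    ("m1", "2024-01-01", 0.40, 1),
    ("m1", "2024-02-01", 0.75, 1),
    ("m2", "2024-01-05", 0.20, 0),
    ("m2", "2024-02-05", 0.10, 0),
]

# Last observed price per market (ISO timestamps sort lexicographically).
final = {}
for market, ts, price, outcome in sorted(rows, key=lambda r: (r[0], r[1])):
    final[market] = (price, outcome)

# Brier score of closing prices: mean squared error vs. the 0/1 resolution.
brier = sum((p - y) ** 2 for p, y in final.values()) / len(final)
print(f"mean Brier of closing prices: {brier:.4f}")  # lower is better
```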
Beijinger 2 days ago [-]
Most people don't know that "prediction markets" are actually based on an idea by DARPA in 2002, after 9/11/2001.
jonahx 2 days ago [-]
Prediction markets, by any reasonable definition, existed long before 2002.
DARPA did have a big role though, too.
ddp26 2 days ago [-]
Yeah. People have put together a Prediction Market Database [1] (in a Google sheet), I think it's pretty well sourced and shows a good number of both real-money and play-money prediction markets from before 2002.
[1] https://docs.google.com/spreadsheets/d/1vGjnJPxdnBKwag3Q9Uy_...
> Most people don't know, that "prediction markets" are acutally based on an idea by DARPA in 2002, after 9/11/2001.
Did they then use a time machine to go back to the mid-90's to pass the idea to Jim Bell so he could take the fall for some of the less attractive possible outcomes?
PedroBatista 1 days ago [-]
Good source.
The only complaint I have (not really directed at the article, but..) is putting all these theories and somewhat private experiments into the same room as pure gambling schemes turbocharged by "the algorithm" and political corruption.
While far from Heaven's gates, some guy trying to predict the price of corn next year is not on the same plane as those who had the "very original" idea every guy in his early 20s has at some point but never went further because he read some articles about "the law". Like it or not, the laws, or the remnants of them, were put in place due to the obvious degenerate attitudes and consequences gambling was always known for.
And no, it's not a "market", even Uber appears to have some usefulness to offset all the lying, corruption and criminality they had to do in order to become what they are. These ones don't even take you places other than gambler addiction.
End of rant, sorry.
iammrpayments 1 days ago [-]
Did I just read a Claude ad?
SamTinnerholm 1 days ago [-]
[dead]
7777777phil 20 hours ago [-]
[dead]