
New — Built on this archive

Live — The Wire Signal board

See today’s sharpest Wire Signal divergences across every active Polymarket contract on /signals/ — our live scoreboard of YES VALUE, NO VALUE, and leaning markets, updated on every publish.

Receipts — the scoreboard

Curious how Wire has actually done on resolved markets? /receipts/ scores every case-study market: what the Wire Signal called at close, and how the market actually resolved. No cherry-picking.

The Wire Score is PredictWire’s calibration-adjusted probability and A–D confidence grade for every active prediction market, fitted on the 194,111-snapshot dataset below. Across the full archive, Wire Score v2 improves the Brier score over raw prices by +4.86% (with separate calibration curves per category), and it sorts snapshots into four grades whose Brier scores differ by an order of magnitude. Read the methodology →
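
For intuition on what “separate calibration curves per category” means in practice, here is a minimal sketch that fits one isotonic curve per category on the resolved-market archive. The column names and the choice of isotonic regression are our illustration, not the Wire Score v2 recipe itself (that lives on the methodology page), and a real version would cross-validate rather than score in-sample:

    # Illustrative only: per-category calibration curves via isotonic regression.
    # Column names (category, closing_price, outcome) are assumptions about
    # markets_resolved.csv; the actual Wire Score v2 method is documented separately.
    import pandas as pd
    from sklearn.isotonic import IsotonicRegression

    df = pd.read_csv("markets_resolved.csv")

    calibrators = {}
    for cat, grp in df.groupby("category"):
        iso = IsotonicRegression(y_min=0.0, y_max=1.0, out_of_bounds="clip")
        iso.fit(grp["closing_price"], grp["outcome"])   # learn price -> observed frequency
        calibrators[cat] = iso

    def calibrated_probability(category, price):
        """Map a raw market price to a calibration-adjusted probability."""
        iso = calibrators.get(category)
        return float(iso.predict([price])[0]) if iso is not None else price

    raw_brier = ((df["closing_price"] - df["outcome"]) ** 2).mean()
    adj = df.apply(lambda r: calibrated_probability(r["category"], r["closing_price"]), axis=1)
    print(f"raw Brier {raw_brier:.4f} vs in-sample calibrated Brier {((adj - df['outcome'])**2).mean():.4f}")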

Do prediction markets actually work? We built the PredictWire Calibration Archive — the first public, reproducible dataset measuring Polymarket’s real-world forecasting accuracy — to answer that question with numbers instead of opinions. Here is what 1,914 resolved binary markets representing $30.48 billion in trading volume tell us.

Headline Numbers

Markets analyzed 1,914
Total volume $30.48B
Daily price snapshots 194,111
Brier score (closing price) 0.0865
Brier score (volume-weighted) 0.0665
Brier skill score vs base rate +0.544
Directional accuracy (majority pick wins) 85.7%
Confident-prediction accuracy (≥80% or ≤20%) 97.1% (n=1,341)
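
Every number in this table can be recomputed from markets_resolved.csv (linked at the bottom of the page). A minimal sketch, assuming the CSV’s columns are named closing_price, outcome, and volume; check the actual header before running:

    # Recomputing the headline numbers. Column names are assumptions about the CSV.
    import pandas as pd

    df = pd.read_csv("markets_resolved.csv")
    p, y, vol = df["closing_price"], df["outcome"], df["volume"]

    brier = ((p - y) ** 2).mean()                          # closing-price Brier
    brier_vw = (((p - y) ** 2) * vol).sum() / vol.sum()    # volume-weighted Brier
    base_rate = y.mean()                                   # ~0.254 in this archive
    skill = 1 - brier / ((base_rate - y) ** 2).mean()      # Brier skill vs base-rate forecaster

    directional = ((p >= 0.5) == (y == 1)).mean()          # majority pick wins
    confident = df[(p >= 0.8) | (p <= 0.2)]
    conf_acc = ((confident["closing_price"] >= 0.5) == (confident["outcome"] == 1)).mean()

    print(f"n={len(df)} brier={brier:.4f} vw={brier_vw:.4f} skill={skill:+.3f} "
          f"directional={directional:.1%} confident={conf_acc:.1%} (n={len(confident)})")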

The One Number

If you want a single statistic that captures how well prediction markets work, it is this: when Polymarket prices a contract at 80% or above, or at 20% or below, the market is correct about the direction of the outcome 97.1% of the time. Across the 1,341 confident predictions in our archive, confident calls resolve in the market’s favor roughly 34 times out of 35.

That is the ball game. When prediction markets commit, they are almost always right.

Calibration: The Reliability Table

Accuracy alone doesn’t mean the markets are calibrated. A calibrated forecaster is one whose “70%” calls come true 70% of the time. Here is the closing-price calibration table for every resolved binary Polymarket market with ≥$100k in lifetime volume:

Predicted Probability Markets Mean Predicted Actual Yes Rate Gap
0-10% 1,086 0.9% 1.2% +0.3%
10-20% 75 15.1% 28.0% +12.9%
20-30% 83 25.0% 25.3% +0.3%
30-40% 86 34.9% 39.5% +4.6%
40-50% 130 45.1% 49.2% +4.2%
50-60% 129 54.9% 47.3% -7.6%
60-70% 90 64.7% 63.3% -1.4%
70-80% 56 74.4% 75.0% +0.6%
80-90% 33 85.4% 87.9% +2.5%
90-100% 146 98.2% 99.3% +1.1%

Read this table like a report card. The “Mean Predicted” column is what the market said would happen; the “Actual Yes Rate” column is what actually happened. The closer those two numbers are to each other, the better-calibrated the market. PredictWire’s archive shows:

  • When Polymarket traders collectively priced a contract at 90-100% Yes, the market was almost unanimous and right — mean prediction 98.2%, actual outcome rate 99.3% (n=146).
  • When they priced contracts at 0-10% Yes, the “no” outcome resolved 98.8% of the time across 1,086 markets.
  • The middle of the distribution is noisier (the 50-60% bucket overstated the Yes probability by 7.6 points), but mid-range buckets stay close: the 70-80% bucket had mean predicted 74.4% vs actual 75.0%.
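
The table itself is straightforward to rebuild: cut the closing prices into ten equal-width bins and compare each bin’s mean price to its realized Yes rate. A sketch under the same column-name assumptions as above:

    # Rebuilding the reliability table from closing prices. Column names are assumptions.
    import pandas as pd

    df = pd.read_csv("markets_resolved.csv")
    bins = [i / 10 for i in range(11)]                       # 0-10%, 10-20%, ..., 90-100%
    df["bucket"] = pd.cut(df["closing_price"], bins=bins, include_lowest=True)

    table = df.groupby("bucket", observed=True).agg(
        markets=("closing_price", "size"),
        mean_predicted=("closing_price", "mean"),
        actual_yes_rate=("outcome", "mean"),
    )
    table["gap"] = table["actual_yes_rate"] - table["mean_predicted"]
    print(table.round(3))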

Accuracy vs Time-to-Resolution

A common assumption is that prediction markets get sharper as resolution approaches. The data partially disagrees. Here is Brier score and directional accuracy across 194,111 daily snapshots, grouped by how far each snapshot was from market resolution:

Days Until Resolution Snapshots Brier Score MAE Directional
0-1d 1,080 0.1037 0.2001 82.6%
1-7d 6,788 0.1111 0.2139 81.9%
7-30d 20,969 0.0890 0.1679 87.2%
30-90d 45,881 0.0738 0.1395 89.4%
90-180d 61,686 0.0662 0.1259 90.8%
180-365d 52,829 0.0750 0.1425 89.0%
365d+ 4,878 0.1540 0.2405 75.4%

The counter-intuitive finding: prediction markets are most accurate 90-180 days before resolution, not at the very end. The 0-1 day window has a higher Brier score than the 30-90 day window. Why? Survivorship bias. Easy calls — where the market is already at 0.99 or 0.01 — are effectively decided in advance and drop out of the active price series. The contracts still being traded at the last minute are the ones that were genuinely hard, and that uncertainty shows up in the Brier score. Read the 0-7 day buckets as “the residual hard markets” rather than “the final state of all markets.”
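
This table is built from the per-day snapshot series rather than the per-market CSV. Assuming a long-format snapshot export (one row per market per day, with price, outcome, and days_to_resolution columns; a hypothetical file, since we publish only the aggregated horizon table in the summary JSON), the computation is one group-by:

    # Horizon sketch: Brier, MAE, and directional accuracy by days-to-resolution bucket.
    # "snapshots.parquet" and its column names are hypothetical; the published JSON
    # already carries the aggregated version of this table.
    import pandas as pd

    snaps = pd.read_parquet("snapshots.parquet")
    edges = [0, 1, 7, 30, 90, 180, 365, float("inf")]
    labels = ["0-1d", "1-7d", "7-30d", "30-90d", "90-180d", "180-365d", "365d+"]
    snaps["bucket"] = pd.cut(snaps["days_to_resolution"], bins=edges,
                             labels=labels, include_lowest=True)

    snaps = snaps.assign(
        brier=(snaps["price"] - snaps["outcome"]) ** 2,
        mae=(snaps["price"] - snaps["outcome"]).abs(),
        hit=(snaps["price"] >= 0.5) == (snaps["outcome"] == 1),
    )
    horizon = snaps.groupby("bucket", observed=True).agg(
        snapshots=("brier", "size"), brier=("brier", "mean"),
        mae=("mae", "mean"), directional=("hit", "mean"),
    )
    print(horizon.round(4))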

Where Markets Excel, Where They Struggle

Not all market categories are equal. Categorized results:

Category Markets Volume Brier Directional
economy 68 $2,807.8M 0.0064 100.0%
sports 226 $4,323.0M 0.0339 95.1%
politics 438 $10,329.5M 0.0419 93.2%
crypto 182 $1,339.5M 0.0568 90.7%
geopolitics 141 $2,139.4M 0.1205 82.3%
other 856 $9,517.8M 0.1305 77.8%

Best-performing category: economy/macro (Fed decisions, CPI, GDP), events where consensus is usually clear and market prices converge to near-certainty. Worst-performing category: geopolitics (war, peace deals, regime change). This is intuitive. Geopolitics is hard, and our dataset shows the market knows it too: geopolitical markets carry a Brier roughly three times that of politics and three and a half times that of sports.
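
The category breakdown is a single group-by over the same CSV, assuming category, volume, closing_price, and outcome columns:

    # Per-category Brier and directional accuracy. Column names are assumptions.
    import pandas as pd

    df = pd.read_csv("markets_resolved.csv")
    df["brier"] = (df["closing_price"] - df["outcome"]) ** 2
    df["hit"] = (df["closing_price"] >= 0.5) == (df["outcome"] == 1)

    by_cat = df.groupby("category").agg(
        markets=("brier", "size"),
        volume_musd=("volume", lambda v: v.sum() / 1e6),
        brier=("brier", "mean"),
        directional=("hit", "mean"),
    ).sort_values("brier")
    print(by_cat.round(4))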

When Markets Get It Wrong

Calibration is not perfection. Among 1,341 confident predictions, 39 resolved against the market. Some of the most expensive wrong confident calls in the archive:

  • “Khamenei out as Supreme Leader of Iran by February 28?” — closed at 1.5% Yes. Resolved Yes. $131M volume. Brier 0.969.
  • “Government shutdown on Saturday?” — closed at 96.5% Yes. Resolved No. $13.6M volume. Brier 0.930.
  • “Trump x Ukraine mineral deal signed before May?” — closed at 1.8% Yes. Resolved Yes. $6.8M volume. Brier 0.964.

If you trade prediction markets, these are exactly the kinds of surprise outcomes that make or break P&L. They are rare. But they happen, and we log them honestly.
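
Surfacing these misses from the CSV is a two-line filter: confident closes (≥80% or ≤20%) that resolved against the market, sorted by volume. Column names are again assumptions about the published file:

    # Confident-but-wrong markets, largest volume first. Column names are assumptions.
    import pandas as pd

    df = pd.read_csv("markets_resolved.csv")
    confident = df[(df["closing_price"] >= 0.8) | (df["closing_price"] <= 0.2)]
    wrong = confident[(confident["closing_price"] >= 0.5) != (confident["outcome"] == 1)]
    print(len(wrong), "confident misses")      # 39 in the current archive
    cols = ["question", "closing_price", "outcome", "volume", "brier"]
    print(wrong.sort_values("volume", ascending=False)[cols].head(10))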

Methodology (Short Version)

Every metric on this page is reproducible from the raw data linked below. In brief:

  1. Universe: All Polymarket markets with status=closed, a clean 0/1 resolution, exactly two outcomes (binary Yes/No), and at least $100,000 in lifetime volume. Pulled via the public gamma-api on April 21, 2026.
  2. Price history: Daily mid-price time series for each market, from Polymarket’s CLOB /prices-history?interval=max&fidelity=1440 endpoint (a hedged fetch sketch follows this list). Each market contributes one closing-price snapshot (last trade at least 1 hour before market end) plus one snapshot per trading day to the per-day series.
  3. Brier score: mean((predicted_probability - actual_outcome)²). Outcome is 1.0 if Yes resolved, 0.0 otherwise. Lower is better. A naive “always 50%” forecaster scores 0.25; the base-rate forecaster scores 0.190.
  4. Reliability buckets: Markets grouped into ten equal-width probability bins (0-10%, 10-20%, …, 90-100%) by closing price. Within each bucket, we compute mean predicted probability and mean actual Yes rate. A perfectly calibrated market has a zero gap in every bucket.
  5. Horizon analysis: Each (market, day) snapshot is assigned a days-to-resolution and pooled into roughly log-spaced buckets (0-1d through 365d+). Brier is computed within each bucket.
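
A hedged sketch of the data pull in steps 1-2. The interval and fidelity values are the ones quoted above; the host names, pagination parameters, and response shapes are written from memory and should be checked against Polymarket’s API documentation before use:

    # Hedged data-pull sketch for steps 1-2. Hosts, field names, and response shapes
    # below are assumptions; only interval=max and fidelity=1440 come from this page.
    import json
    import requests

    GAMMA = "https://gamma-api.polymarket.com/markets"     # assumed gamma-api host
    CLOB = "https://clob.polymarket.com/prices-history"    # assumed CLOB host

    def fetch_closed_markets(limit=500):
        """Page through closed markets, keeping binary contracts with >= $100k volume."""
        kept, offset = [], 0
        while True:
            batch = requests.get(GAMMA, params={"closed": "true", "limit": limit,
                                                "offset": offset}, timeout=30).json()
            if not batch:
                return kept
            for m in batch:
                outcomes = m.get("outcomes", "[]")
                outcomes = json.loads(outcomes) if isinstance(outcomes, str) else outcomes
                if len(outcomes) == 2 and float(m.get("volume") or 0) >= 100_000:
                    kept.append(m)
            offset += limit

    def fetch_price_history(clob_token_id):
        """Daily prices (fidelity=1440 minutes) over the market's whole life (interval=max)."""
        r = requests.get(CLOB, params={"market": clob_token_id, "interval": "max",
                                       "fidelity": 1440}, timeout=30)
        return r.json().get("history", [])                 # assumed: list of {"t": ..., "p": ...}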

Full methodology, including data cleaning decisions and edge-case handling, is on our methodology page. Every market and every metric can be recomputed from the CSV and JSON below.

Download the Data

📊 Download the full dataset: markets_resolved.csv (25.4% base rate, 1,914 markets). Summary JSON.

The CSV contains one row per resolved market: id, question, slug, end date, volume, category, closing price, outcome (1 or 0), absolute error, Brier, and history-point count. All columns are plain-text and machine-readable. The JSON contains the complete calibration summary including reliability buckets and horizon tables. Use either in your own research, modeling, or reporting. Please cite PredictWire and link back to this page when you do.

The PredictWire Take

This archive is what prediction markets are, not what we wish they were. The headline: the crowd is usefully accurate — much more accurate than most individual forecasters — but it is not infallible. Markets are most accurate several months out, noisy in the final weeks of hard contests, and reliably excellent on macro questions where prices converge to unanimity.

PredictWire will update this archive quarterly, adding resolved markets from Polymarket, Kalshi, and PredictIt as they settle. We will publish the full transparent diff between runs. This is the data foundation we are building PredictWire on.

Trade on the Markets We Track

Disclosure: PredictWire earns a commission on qualifying accounts opened through the links below. Our rankings and reviews are not influenced by these relationships. Full disclosure.


About this page: Written and maintained by The PredictWire Research Team under our Editorial Standards. Next scheduled update: July 2026. Corrections and data requests: corrections@predictwire.io. Prediction market contracts carry risk of total loss. Nothing here is financial advice.