📊 Model Accuracy

How well do our edge predictions match actual outcomes? We grade ourselves so you don't have to.

📡 Collecting Data

Our improved models went live on March 1, 2026.

0 of ~50 predictions needed for first score · Updating daily

What is a Brier Score?

The Brier Score measures how close our probability estimates are to what actually happens. Lower is better: a perfect score is 0.000, and random guessing scores 0.250. For prediction markets, where you're comparing two imperfect probability sources, scores below 0.20 are considered strong.
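Concretely, the Brier Score is the mean squared error between forecast probabilities and binary outcomes (1 if the event happened, 0 if not). A minimal sketch of the calculation, with illustrative numbers that are not from our actual track record:

```python
def brier_score(probs, outcomes):
    """Mean squared difference between forecast probabilities (0..1)
    and realized binary outcomes (0 or 1). Lower is better."""
    assert len(probs) == len(outcomes) and len(probs) > 0
    return sum((p - o) ** 2 for p, o in zip(probs, outcomes)) / len(probs)

# Confident, correct forecasts score near 0:
print(brier_score([0.9, 0.2, 0.7], [1, 0, 1]))  # (0.01 + 0.04 + 0.09) / 3 ≈ 0.047

# Always guessing 50/50 scores exactly 0.25, the "random guessing" baseline:
print(brier_score([0.5, 0.5], [1, 0]))  # 0.25
```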

It's the same metric used by election forecasters (FiveThirtyEight, The Economist) and professional weather services to grade prediction accuracy.

A: < 0.13 (Excellent)
B: < 0.20 (Strong)
C: < 0.25 (Competitive)
D: < 0.30 (Needs Work)
F: ≥ 0.30 (Poor)
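The letter grades map directly onto Brier Score thresholds. A small sketch of that mapping (the function name is ours, but the cutoffs are the ones listed above):

```python
def letter_grade(brier: float) -> str:
    """Map a Brier Score to the grade scale used on this page."""
    if brier < 0.13:
        return "A"  # Excellent
    if brier < 0.20:
        return "B"  # Strong
    if brier < 0.25:
        return "C"  # Competitive
    if brier < 0.30:
        return "D"  # Needs Work
    return "F"      # Poor

print(letter_grade(0.18))  # B
```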

Our Methodology

EdgeScouts compares prices across prediction markets (Polymarket) against independent probability sources: Pinnacle sportsbook odds, weather forecasts, options-implied probabilities, and economic consensus data.

When our models detect a significant divergence (edge), we surface it on the dashboard. The Brier Score tracks how often our "fair value" estimates match reality after the event resolves.
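The edge-detection step can be sketched as a simple divergence check between the market's implied probability and our fair-value estimate. This is an illustration only; the function name and the 5-point threshold are assumptions, not our production logic:

```python
def detect_edge(market_prob: float, fair_prob: float,
                threshold: float = 0.05):
    """Return the signed edge (fair value minus market price) if the
    divergence clears the threshold, otherwise None.

    A positive edge means the market looks underpriced relative to
    the independent fair-value estimate."""
    edge = fair_prob - market_prob
    return edge if abs(edge) >= threshold else None

# Market at 40 cents, fair value estimated at 48%: an 8-point edge.
print(detect_edge(0.40, 0.48))
# Market at 50 cents, fair value 52%: inside the threshold, no signal.
print(detect_edge(0.50, 0.52))  # None
```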

Scores recalculate daily. As our models improve, you'll see grades trend upward over time.

Last updated: 2026-03-01T12:16:00