Why Fair Value Beats Consensus

A naive median across sportsbooks weights every book equally. BaseCase's hybrid fair value blends 60% Pinnacle with 40% cluster-aware weighted consensus, preventing shared B2B platforms from dominating.

April 21, 2026 · 6 min read · fair-value · methodology · advanced

Fair value is BaseCase's estimate of the true probability of an outcome, expressed either as a probability or as the odds that would return exactly zero edge at that probability. It is the anchor against which book prices are evaluated: edge detection, expected value, and Kelly sizing all depend on it. Bias in fair value propagates into every downstream signal, so the methodology earns the most scrutiny.

Why median consensus is insufficient

A naive method takes the median of de-vigged book prices on a given outcome and calls that fair value. The problem is structural: many sportsbooks share pricing engines or copy from each other. Aggregating across "books" that are really the same underlying source weights that source disproportionately. Out-of-sample backtests on a 1M+ moneyline dataset showed median consensus produces near-coinflip win rates on bets it identifies as +EV — roughly 49.5%, indistinguishable from random.

The reason: the books most likely to be on the wrong side of a sharp move are the same books that move slowest, and those books dominate the median by sheer count. Naive aggregation rewards what is, in effect, the loudest voice in a room of recently-quoted prices.
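The double-counting effect is easy to see in a toy market. The sketch below is illustrative, not BaseCase's code: the odds are invented, and the proportional de-vig helper is the naive method described below, used here only to put the books on a common probability scale.

```python
import statistics

def devig_proportional(decimal_odds):
    """Naive de-vig: normalize implied probabilities to sum to 1."""
    implied = [1.0 / o for o in decimal_odds]
    total = sum(implied)
    return [p / total for p in implied]

# Hypothetical two-way market quoted by five "books" (decimal odds for
# side A / side B). Three of them share a pricing engine, so their
# quotes are near-identical copies of one underlying source.
books = [
    (1.87, 1.95),  # sharp book
    (1.80, 2.02),  # independent book
    (1.74, 2.10),  # shared engine, copy 1
    (1.74, 2.10),  # shared engine, copy 2
    (1.75, 2.09),  # shared engine, copy 3
]

probs_a = [devig_proportional(odds)[0] for odds in books]
median_fair_a = statistics.median(probs_a)

# The three shared-engine copies occupy the middle of the sorted list,
# so the median IS their price; the sharp book's quote is discarded.
```

Counting the shared-engine copies as one voice instead of three would pull the median back toward the sharp side of the market, which is exactly what the cluster-aware consensus below does.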

Pinnacle Hybrid

Pinnacle is widely accepted as the sharpest available source — its prices reflect serious volume from professional bettors and re-price quickly toward true value. BaseCase's primary fair-value methodology, labeled pinnacle_hybrid in the fair_source field and rendered as the PH badge in the Edge Finder, blends:

  • 60% Pinnacle's de-vigged price
  • 40% cluster-aware weighted consensus across the remaining sharp books

The 40% term is not a simple secondary median. It uses hierarchical clustering (Mantegna 1999) to identify books that share pricing engines or move in lockstep, treating each cluster as a single voice rather than as N independent observations. Within each cluster, the book that historically reacts fastest to information gets the highest weight. This produces a consensus signal that captures sharp-book dispersion without double-counting the slow re-quoters.

When Pinnacle is unavailable for a market — a non-priced sport, a temporary feed gap — BaseCase falls back to weighted blend (weighted_blend, WB) or median consensus (median_consensus, MC) and labels the row accordingly. The badge tells you which methodology produced the number you're acting on.
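The methodology selection reads as a simple fallback chain. The sketch below is a schematic, not BaseCase's implementation: the function name and argument handling are assumed, and the real weighted blend recomputes a consensus from the remaining sharp books rather than reusing a single precomputed number.

```python
def fair_value(pinnacle_prob, consensus_prob, median_prob):
    """Return (fair probability, fair_source label) using the best
    available methodology, mirroring the badges described above."""
    if pinnacle_prob is not None and consensus_prob is not None:
        # PH: 60% de-vigged Pinnacle, 40% cluster-aware consensus.
        return 0.60 * pinnacle_prob + 0.40 * consensus_prob, "pinnacle_hybrid"
    if consensus_prob is not None:
        # WB: no Pinnacle anchor; lean on the weighted sharp consensus.
        return consensus_prob, "weighted_blend"
    # MC: last resort, naive median.
    return median_prob, "median_consensus"

fair, source = fair_value(0.52, 0.535, 0.55)  # -> 0.526, "pinnacle_hybrid"
```

The returned label is what drives the PH/WB/MC badge, so a consumer of the number always knows which branch produced it.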

Power-method de-vigging

Books quote prices that include a margin (the vig). To compare prices across books, you need to remove the vig and recover the implied probabilities. The simplest method, proportional de-vigging, divides each side's implied probability by the sum of all sides' implied probabilities. This is biased on uneven markets: books concentrate their margin on the longshot side, so scaling every side by the same factor understates favorites and overstates longshots.

BaseCase uses the power method (Clarke et al. 2017): solve for an exponent k such that the de-vigged probabilities sum to 1. The power method preserves the relative pricing of long shots versus favorites in the way the book originally intended, which matters most on moneylines with extreme prices. The de-vigged probabilities feed into both the Pinnacle Hybrid blend and the cluster-weighted consensus.
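A minimal power-method solver can be written with bisection on the exponent. This is a sketch of the technique described above, not BaseCase's code: the function name, the bracketing bounds, and the example odds are all assumptions.

```python
def devig_power(decimal_odds, tol=1e-10):
    """Power-method de-vig: find k such that sum((1/odds)**k) == 1,
    then return the de-vigged probabilities (1/odds)**k."""
    implied = [1.0 / o for o in decimal_odds]

    def total(k):
        return sum(p ** k for p in implied)

    # Each implied p is < 1, so total(k) is strictly decreasing in k.
    # With overround, total(1) > 1, so the root lies at some k > 1:
    # bracket it, then bisect.
    lo, hi = 1.0, 2.0
    while total(hi) > 1.0:
        hi *= 2.0
    while hi - lo > tol:
        mid = (lo + hi) / 2.0
        if total(mid) > 1.0:
            lo = mid
        else:
            hi = mid
    return [p ** hi for p in implied]

# Lopsided hypothetical moneyline: heavy favorite at 1.05, longshot at 9.00.
odds = [1.05, 9.00]
power = devig_power(odds)
proportional = [(1 / o) / sum(1 / x for x in odds) for o in odds]
# Raising probabilities to k > 1 shrinks the longshot's share far more
# than the favorite's, so the power method hands the longshot a smaller
# de-vigged probability than proportional scaling does.
```

On this example the two methods disagree by several points of probability on the longshot, which is exactly where the bias matters for moneylines with extreme prices.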

A subtle point: vig cancellation

There is a quirk worth flagging: BaseCase does not apply isotonic favorite-longshot-bias (FLB) correction to fair value used for edge detection. Empirically, the FLB structure in the de-vigged Pinnacle price partially offsets the vig structure in soft-book prices. Subtracting both produces a smaller exploitable gap than leaving them in. The edge detection pipeline is calibrated against the uncorrected fair value because that is the object that actually predicts realized win rates on this dataset. This is counterintuitive but borne out in the backtests; full treatment is the subject of a separate research piece.

How fair value appears in the UI

Every row in the Edge Finder shows a small two-letter mono badge (PH, WB, MC) next to the fair-probability percentage. The badge tells you the source. PH is the strongest signal; MC is the weakest and appears as a fallback. Edges sourced from MC should be treated with proportionally more skepticism — they're the rows where Pinnacle wasn't available to anchor the estimate.

Fair value is not a number you compute once. It is a methodology you commit to. The badge on every BaseCase row tells you which one was used.

Caveats

The 60/40 blend is a single calibration point in a continuous design space. Different ratios perform better on different sports, on different game states, and at different times relative to event start. BaseCase ships one production blend; refinement is ongoing. Markets without active Pinnacle pricing — most prediction-market events, niche props, deep futures — fall back to weighted blend or consensus, and edge detection on those markets is correspondingly less reliable.

Further reading