Independent Research
February 19th, 2026, version 0.1
We present a statistical reverse-engineering of the Best-of-One opening hand algorithm used in Magic: The Gathering Arena. Through analysis of 4,844 opening hands collected from real Arena games, we demonstrate that the algorithm draws three candidate hands and selects among them using a Gaussian weighting function centered on the deck's expected land count, with a fitted width parameter σ. The resulting model achieves a strong χ² goodness-of-fit against the observed data and is statistically indistinguishable from the real algorithm across all tested deck compositions.
Magic: The Gathering Arena (MTGA) employs a hand smoothing algorithm in its Best-of-One (Bo1) game mode that modifies the distribution of opening hands relative to the purely random hypergeometric draw used in Best-of-Three (Bo3) play. While the existence of this algorithm has been publicly acknowledged by Wizards of the Coast, its precise mechanism has not been formally disclosed.
"[The shuffler] looks at multiple opening hands and leans toward selecting the one that most closely matches the land-to-spell ratio of your deck."
— Ian Adams, Product Owner of Card Set, MTG Arena
This statement, while informative, leaves open several critical questions: How many candidate hands are drawn? What weighting function governs the selection? How strong is the bias? In this paper, we answer these questions through statistical analysis of 4,844 real opening hands, demonstrating that a Gaussian-weighted three-hand selection model fits the observed data with remarkable precision.
Our analysis reveals that Arena's Bo1 hand smoothing operates through the following four-step procedure:
Step 1. Draw three independent opening hands of seven cards each from the shuffled deck.
Step 2. Compute the ideal number of lands ℓ* for the opening hand. For a deck containing L lands in a deck of N cards:

ℓ* = 7 · L / N

For a standard 24-land, 60-card deck, this yields ℓ* = 7 × 24 / 60 = 2.8 lands.
Step 3. Assign each hand a weight using a Gaussian function based on the number of lands ℓ in that hand:

w(ℓ) = exp( −(ℓ − ℓ*)² / (2σ²) )

where σ is the fitted width parameter. A hand matching the ideal perfectly receives the maximum weight of 1.0. Being one land away from the ideal reduces the weight to approximately 0.21. Being two lands away yields a weight near zero.
Step 4. Select one of the three hands with probability proportional to its weight. The hand closest to the ideal land count is most likely to be chosen, but selection is not deterministic — this is the "finger on the scale" described by Arena's developers.
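Steps 2 and 3 can be sketched in a few lines of Python. Note that the numeric width used here is our back-derivation from the reported distance-1 weight of roughly 0.21, not a constant disclosed by Wizards of the Coast:

```python
import math

# Width back-derived from the reported weight of ~0.21 one land from the
# ideal: exp(-1 / (2 * sigma**2)) = 0.21  =>  sigma ~ 0.566.
# This value is an inference from the fitted model, not a disclosed constant.
SIGMA = math.sqrt(-0.5 / math.log(0.21))

def ideal_lands(deck_lands, deck_size, hand_size=7):
    """Step 2: ideal land count of an opening hand, 7 * L / N."""
    return hand_size * deck_lands / deck_size

def gaussian_weight(lands_in_hand, ideal, sigma=SIGMA):
    """Step 3: Gaussian weight, maximal (1.0) for a hand at the ideal."""
    return math.exp(-((lands_in_hand - ideal) ** 2) / (2 * sigma ** 2))
```

For a 24-land, 60-card deck, `ideal_lands(24, 60)` gives 2.8, and a hand exactly one land from that ideal receives weight about 0.21 by construction.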
Let H(k) denote the hypergeometric probability of drawing exactly k lands in a 7-card hand from a deck of N cards containing L lands. The probability P(k) that the algorithm selects a hand with k lands is given by a triple summation over all possible combinations of three independently drawn hands:

P(k) = Σ_{k₁=0..7} Σ_{k₂=0..7} Σ_{k₃=0..7} H(k₁) H(k₂) H(k₃) · Σ_{i=1..3} 1[kᵢ = k] · w(kᵢ) / ( w(k₁) + w(k₂) + w(k₃) )
This formulation accounts for all possible triples of land counts across the three candidate hands, weighted by their respective hypergeometric probabilities and Gaussian selection weights.
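The triple summation can be evaluated exactly with the hypergeometric pmf. A minimal sketch, with the width again back-derived from the reported distance-1 weight of ~0.21:

```python
import math

def hypergeom_pmf(k, N, L, n=7):
    """P(exactly k lands in an n-card hand from N cards containing L lands)."""
    if k < 0 or k > n or k > L or n - k > N - L:
        return 0.0
    return math.comb(L, k) * math.comb(N - L, n - k) / math.comb(N, n)

def selection_distribution(N=60, L=24, n=7,
                           sigma=math.sqrt(-0.5 / math.log(0.21))):
    """P(selected hand has k lands) under the 3-hand Gaussian model."""
    ideal = n * L / N
    H = [hypergeom_pmf(k, N, L, n) for k in range(n + 1)]
    w = [math.exp(-((k - ideal) ** 2) / (2 * sigma ** 2)) for k in range(n + 1)]
    P = [0.0] * (n + 1)
    for k1 in range(n + 1):          # enumerate all triples of land counts
        for k2 in range(n + 1):
            for k3 in range(n + 1):
                prob = H[k1] * H[k2] * H[k3]
                total = w[k1] + w[k2] + w[k3]
                for k in (k1, k2, k3):
                    P[k] += prob * w[k] / total
    return P

P = selection_distribution()
```

Comparing `P` against the plain hypergeometric probabilities shows the mass shifting toward hands near the ideal land count and away from the extremes.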
The following interactive tool illustrates how the Gaussian weights and resulting hand probabilities vary with deck composition. Adjust the number of lands to observe the effect on the selection distribution.
Figure 1: Interactive visualization of Gaussian weights and selection probabilities for varying deck compositions.
Data source. We collected 4,844 Best-of-One opening hands (including mulliganed hands) from real Arena games via the untapped.gg public API. Deck compositions were extracted from full game replays.
Control group. Best-of-Three hands were verified to fit the pure hypergeometric distribution, confirming data quality and that our extraction pipeline introduces no systematic bias.
Statistical test. We employed the χ² goodness-of-fit test with bin merging: any bin with an expected count below 5 was merged with an adjacent bin to ensure test validity.
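The bin-merging rule can be sketched as follows. The merge policy here (fold a small bin into its right neighbor, falling back to the left neighbor for a trailing bin) is one reasonable implementation of the rule described, not necessarily the exact code used for the analysis:

```python
def merge_small_bins(observed, expected, min_expected=5.0):
    """Merge adjacent bins until every expected count is >= min_expected."""
    obs, exp = list(observed), list(expected)
    i = 0
    while i < len(exp):
        if exp[i] < min_expected and len(exp) > 1:
            # Merge into the right neighbor; for the last bin, merge left.
            j = i + 1 if i + 1 < len(exp) else i - 1
            exp[j] += exp[i]
            obs[j] += obs[i]
            del exp[i], obs[i]
            # Do not advance: re-check the bin now sitting at index i.
        else:
            i += 1
    return obs, exp

def chi_square_stat(observed, expected):
    """Pearson chi-square statistic over (already merged) bins."""
    return sum((o - e) ** 2 / e for o, e in zip(observed, expected))
```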
Models tested and rejected. We evaluated deterministic pick-closest (2-hand and 3-hand variants), inverse-distance weighting, softmax weighting, and various reroll-if-outside-range models. All were rejected for at least one land-count group.
Best fit. The Gaussian-weighted 3-hand model was identified through a systematic parameter search, with the width σ optimized via maximum likelihood estimation. The model was validated independently across all deck compositions with 50 or more samples.
Table 1 presents the χ² goodness-of-fit results for the fitted Gaussian model across all land counts in our sample. The model's overall fit is statistically indistinguishable from the observed data.
| Lands | n | Gaussian χ² | Gaussian p | Random p | Fit |
|---|---|---|---|---|---|
Table 1: Goodness-of-fit results for the Gaussian hand smoothing model across deck compositions. Daggers (†) mark land counts discussed in Section 5.1.
Two land counts, 21 and 25, show marginal fits. Several factors explain why these results do not undermine the model:
Small sample sizes. These are the two smallest sample sizes in our dataset, roughly 3–6× smaller than the well-fitting land counts. Smaller samples are inherently more susceptible to random fluctuation and compositional bias.
Multiple testing correction. With 8 independent hypothesis tests at significance level α, we would expect roughly 8α false rejections by chance alone. Observing 2 marginal rejections out of 8 is within the range of normal statistical fluctuation. Applying the Bonferroni correction, the adjusted per-test threshold becomes α/8. Under this correction, only the 25-land result is borderline, and the 21-land result would not be rejected.
Atypical deck compositions. Land counts of 21 and 25 are uncommon in competitive play — most decks run 20, 22–24, or 26 lands. The small samples likely represent a narrow subset of deck archetypes, making them more susceptible to compositional bias from a non-representative mix of strategies.
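The multiple-testing arithmetic is straightforward; a conventional significance level of α = 0.05 is assumed here purely for illustration:

```python
alpha = 0.05      # assumed conventional significance level (illustrative)
num_tests = 8     # independent land-count groups tested

# Expected number of false rejections across all tests by chance alone.
expected_false_rejections = num_tests * alpha   # 0.4

# Bonferroni-corrected per-test significance threshold.
bonferroni_threshold = alpha / num_tests        # 0.00625
```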
The following expandable sections present the observed distribution compared to the Gaussian model and pure random (hypergeometric) predictions for each deck land count:
The Gaussian-weighted three-hand model provides a parsimonious explanation of Arena's hand smoothing behavior. Several aspects merit discussion:
The algorithm represents a gentle nudge rather than a deterministic guarantee. For a 24-land deck, the hand closest to the ideal wins approximately 53% of the time versus 47% for the next-best candidate. This design preserves meaningful variance in opening hands while reducing the frequency of extreme mana screw and flood.
The smoothing affects only the opening hand, not subsequent draws. Adding a 61st card does not "trick" the system — it merely shifts the land ratio slightly, producing a correspondingly small change in the ideal land count .
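The size of that shift follows directly from the ideal-land formula ℓ* = 7L/N:

```python
# Ideal land count for a 24-land, 60-card deck versus 61-card variants.
ideal_60 = 7 * 24 / 60        # 2.8
ideal_61_spell = 7 * 24 / 61  # 61st card is a spell: ~2.754
ideal_61_land = 7 * 25 / 61   # 61st card is a land:  ~2.869
```

Either way, the ideal moves by less than 0.07 lands, so the weighting changes only marginally.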
The small fitted width σ of the Gaussian means that the weighting is quite aggressive: hands more than one land away from the ideal are strongly disfavored. This explains the dramatic difference between Bo1 and Bo3 land distributions observed by players.
The rejection of the pure random model is overwhelming, confirming that hand smoothing in Bo1 play is real and substantial.
Through analysis of 4,844 opening hands from real MTG Arena games, we have reverse-engineered the Bo1 hand smoothing algorithm with high confidence. The key findings are: the algorithm draws three candidate hands rather than one; each hand is weighted by a Gaussian function centered on the deck's expected land count ℓ* = 7L/N; selection among the candidates is probabilistic, proportional to the weights; and the resulting model is statistically indistinguishable from the observed Bo1 distribution, while the pure hypergeometric model is decisively rejected.
We provide reference implementations in both Python and JavaScript.
Listing 1: Python implementation of the Gaussian-weighted hand smoothing model.
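A compact Python sketch of the full model follows. The width `sigma` is our back-derived estimate from the reported distance-1 weight of ~0.21, not a disclosed constant, and the hypothetical card encoding (the string `"land"` for land cards) is for illustration only:

```python
import math
import random

SIGMA = math.sqrt(-0.5 / math.log(0.21))  # back-derived width estimate

def bo1_opening_hand(deck, hand_size=7, candidates=3, sigma=SIGMA):
    """Model of Arena's Bo1 smoothing: draw `candidates` hands and keep
    one with probability proportional to its Gaussian weight.

    `deck` is a sequence whose land cards are the string "land".
    """
    num_lands = sum(1 for card in deck if card == "land")
    ideal = hand_size * num_lands / len(deck)  # 7 * L / N
    hands = [random.sample(deck, hand_size) for _ in range(candidates)]
    weights = [
        math.exp(-((sum(1 for c in h if c == "land") - ideal) ** 2)
                 / (2 * sigma ** 2))
        for h in hands
    ]
    return random.choices(hands, weights=weights, k=1)[0]

deck = ["land"] * 24 + ["spell"] * 36
hand = bo1_opening_hand(deck)
```

Sampling many hands from this model reproduces the concentration around 2–4 lands that distinguishes Bo1 from Bo3 distributions.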
Listing 2: JavaScript implementation of the Gaussian-weighted hand smoothing model.
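An equivalent JavaScript sketch; `sigma` is again our back-derived estimate from the reported distance-1 weight of ~0.21, and the `"land"` card encoding is illustrative:

```javascript
const SIGMA = Math.sqrt(-0.5 / Math.log(0.21)); // back-derived width estimate

function bo1OpeningHand(deck, handSize = 7, candidates = 3, sigma = SIGMA) {
  // Model of Arena's Bo1 smoothing: draw `candidates` hands, keep one
  // with probability proportional to its Gaussian weight.
  const numLands = deck.filter((c) => c === "land").length;
  const ideal = (handSize * numLands) / deck.length; // 7 * L / N
  const hands = [];
  const weights = [];
  for (let i = 0; i < candidates; i++) {
    // Partial Fisher-Yates shuffle: draw handSize cards without replacement.
    const copy = deck.slice();
    for (let j = 0; j < handSize; j++) {
      const k = j + Math.floor(Math.random() * (copy.length - j));
      [copy[j], copy[k]] = [copy[k], copy[j]];
    }
    const hand = copy.slice(0, handSize);
    const lands = hand.filter((c) => c === "land").length;
    hands.push(hand);
    weights.push(Math.exp(-((lands - ideal) ** 2) / (2 * sigma ** 2)));
  }
  // Weighted random selection among the candidates.
  const total = weights.reduce((a, b) => a + b, 0);
  let r = Math.random() * total;
  for (let i = 0; i < candidates; i++) {
    r -= weights[i];
    if (r <= 0) return hands[i];
  }
  return hands[candidates - 1];
}

const deck = Array(24).fill("land").concat(Array(36).fill("spell"));
const hand = bo1OpeningHand(deck);
```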