The Law of Large Numbers in Coin Flipping

Why ten heads in a row doesn't mean a "broken" coin, why the heads-percentage drifts toward 50% only after many flips, and how to use this when running probability experiments.

Last reviewed: April 25, 2026

The idea, in plain English

The law of large numbers says that as you repeat a random process more and more times, the proportion of each outcome gets closer and closer to its true probability. For a fair coin: keep flipping, and the share of heads will home in on 50%.

Two things this does not say:

  • It does not say each flip is more or less likely to be heads or tails depending on what came before. Each flip is still 50/50, every time.
  • It does not say the absolute number of heads will catch up with the absolute number of tails. After a million flips you might be 1,000 heads "ahead" — and that's perfectly normal. The proportion is what converges.
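
Both points are easy to see in a quick simulation. The sketch below (plain Python using only the standard library's random module; the function name and checkpoints are just illustrative) prints the running heads-percentage alongside the raw heads-minus-tails gap: the percentage settles toward 50 while the gap is free to wander.

    import random

    def running_tally(n_flips, seed=1):
        """Flip a simulated fair coin n_flips times, reporting progress."""
        rng = random.Random(seed)
        heads = 0
        checkpoints = {10, 100, 1_000, 10_000, 100_000, 1_000_000}
        for flip in range(1, n_flips + 1):
            heads += rng.random() < 0.5  # True counts as 1
            if flip in checkpoints:
                gap = heads - (flip - heads)  # heads minus tails
                print(f"{flip:>9} flips: {100 * heads / flip:6.2f}% heads, gap {gap:+d}")

    running_tally(1_000_000)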

Why short sequences look wild

If you only flip a few times, expect lopsided results. Variance — the spread of plausible outcomes around the average — is wide when you have few flips, and narrows as the number of flips grows. A useful rough guide:

Number of flips    Plausible heads-% range (≈ 95% of trials)    What "lopsided" looks like
10                 ~20% – 80%                                   8 of one side
50                 ~36% – 64%                                   32 of one side
100                ~40% – 60%                                   60 of one side
1,000              ~47% – 53%                                   530 of one side
10,000             ~49% – 51%                                   5,100 of one side

The ranges are approximate (derived from the standard deviation of a binomial distribution with p = 0.5), but the pattern is the point: the spread shrinks like one over the square root of the number of flips. Quadrupling your flips only halves the spread.
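
If you want to reproduce the table, the rough recipe is: the heads-percentage has a standard deviation of about 50%/√n, and roughly 95% of trials land within two standard deviations of 50%. A minimal sketch of that calculation, assuming nothing beyond the standard library:

    import math

    # ~95% range of the heads-percentage for a fair coin after n flips:
    # the standard deviation of the proportion is 0.5 / sqrt(n), so the
    # range is roughly 50% plus or minus 2 * 50% / sqrt(n).
    for n in (10, 50, 100, 1_000, 10_000):
        half_width = 2 * 50 / math.sqrt(n)  # in percentage points
        print(f"{n:>6} flips: {50 - half_width:4.1f}% – {50 + half_width:4.1f}%")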

Practical takeaway: if you do ten flips in the simulator and get 70% heads, that's well inside normal. To detect a real bias of, say, one percentage point (a 51/49 coin), you'd need on the order of tens of thousands of flips, not dozens.
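
To see where "tens of thousands" comes from, turn the square-root law around: a bias only becomes distinguishable from noise once two standard deviations of the heads-percentage are smaller than the bias itself. A back-of-the-envelope sketch (the two-sigma threshold is an illustrative assumption; a proper power calculation roughly doubles the answer):

    import math

    def flips_needed(bias, sigmas=2.0):
        """Roughly how many flips before a bias this size exceeds the
        sampling noise of a fair coin (noise ~ 0.5 / sqrt(n))."""
        return math.ceil((sigmas * 0.5 / bias) ** 2)

    print(flips_needed(0.01))   # 1% bias   -> about 10,000 flips
    print(flips_needed(0.001))  # 0.1% bias -> about 1,000,000 flips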

Worked example using the on-site stats

The four counters under the coin track total flips, heads, tails, and heads-percentage. Try this:

  1. Reset the counters.
  2. Flip 20 times and write down the heads-percentage.
  3. Flip another 80 times (total 100). Write down the new heads-percentage.
  4. Flip another 400 (total 500). Write down again.

You'll usually see something like 65% → 54% → 50.6%. Each step has a smaller swing than the last. That trajectory is the law of large numbers — not because the coin is "correcting", but because the few flips at the start matter less and less to the running average as you add more flips on top.
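
If you'd rather script those four steps than click through them, here's a rough equivalent with a simulated fair coin (the exact percentages will differ every run, but the shrinking swings should not):

    import random

    rng = random.Random()  # unseeded, so each run tells its own story
    heads = total = 0

    for extra in (20, 80, 400):  # running totals of 20, 100, 500 flips
        for _ in range(extra):
            heads += rng.random() < 0.5
            total += 1
        print(f"after {total:3d} flips: {100 * heads / total:.1f}% heads")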

Streaks: how long is too long?

People are bad at intuiting how long a "normal" streak is. A useful rule of thumb: in n independent fair flips, expect the longest run of one outcome to be roughly log₂(n) in length. So:

  • In 64 flips, expect a longest run of about 6.
  • In 1,024 flips, expect a longest run of about 10.
  • In a million flips, expect a longest run of about 20.
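
A quick way to sanity-check the rule of thumb is to measure the longest run in simulated batches and compare it with log₂(n). A sketch, where the batch sizes and repeat count are arbitrary choices:

    import math
    import random

    def longest_run(n_flips, rng):
        """Length of the longest run of identical outcomes in n_flips fair flips."""
        best = current = 0
        previous = None
        for _ in range(n_flips):
            outcome = rng.random() < 0.5
            current = current + 1 if outcome == previous else 1
            previous = outcome
            best = max(best, current)
        return best

    rng = random.Random(7)
    for n in (64, 1_024, 65_536):
        runs = [longest_run(n, rng) for _ in range(50)]
        print(f"n = {n:>6}: average longest run {sum(runs) / len(runs):.1f}, log2(n) = {math.log2(n):.0f}")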

If you see a run of seven heads in 100 flips, you have not witnessed an anomaly. You have witnessed a normal Tuesday. We cover the related "the next flip is due" misconception in our article on the probability of a coin flip.

Common mistakes

  • Quitting early and declaring the coin biased. Twenty flips is not enough data to detect anything but a wildly broken coin.
  • Confusing "approaches 50%" with "equals 50%". The law promises convergence, not equality. Heads counts and tails counts can drift apart in absolute terms while the percentage drifts toward 50.
  • Adjusting your "expected" outcome based on streaks. The next flip after a long streak is still 50/50.
  • Reading the heads-percentage too often. Watching a counter update each flip exaggerates the feeling of "imbalance". The actual variance is much smaller than visual intuition suggests.

Where this fails: weighted coins

Everything above assumes a fair coin. If a coin is biased — say, it lands heads 55% of the time — the law of large numbers still works, but it converges to the true probability (55%), not to 50%. So if your simulator counter sat at, say, 56% heads after 50,000 flips, the most likely explanation isn't that the law of large numbers has failed; it's that the coin isn't fair. The Coin Toss Simulator uses an unbiased mapping from a uniform random source, so this scenario shouldn't occur — but the same logic applies to any random process you analyse.
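
The same running-percentage simulation makes this concrete: give the simulated coin a 55% heads probability and the counter settles near 55, not 50. A sketch (the 0.55 figure is just the example bias from above):

    import random

    rng = random.Random(3)
    heads = 0
    for flip in range(1, 50_001):
        heads += rng.random() < 0.55  # weighted coin: P(heads) = 0.55
        if flip in (100, 1_000, 10_000, 50_000):
            print(f"{flip:>6} flips: {100 * heads / flip:.2f}% heads")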

Why this matters beyond coin flipping

Coin flipping is a teaching example because it's binary and the maths is clean. The same principle governs casino games, polling, A/B testing, and almost every other situation where you want to estimate a probability from a sample. The lesson is the same: small samples are noisy, big samples are precise, and the noise shrinks slower than you'd guess.