Arbitrary distribution using a biased coin

Problem. Let’s assume that you have a biased coin, but you don’t know the probabilities of its sides. Using the coin, how would you choose one of the numbers 1, 2 or 3 with equal probability?

Stop reading here and try solving the problem yourself before moving on.

Solution. The problem becomes easier if we split it into two parts. First, we will learn how to emulate a fair coin using a biased one. Then we will describe a procedure that lets us draw numbers from an arbitrary probability distribution.

Let’s denote the probability of tails as \(p\). Then if we toss the coin twice, we can get one of the four outcomes:

  1. Tails both times, with probability \(p^2\).
  2. Heads both times, with probability \((1-p)^2\).
  3. Tails on the first toss, heads on the second, with probability \(p(1-p)\).
  4. Heads on the first toss, tails on the second, with probability \(p(1-p)\).

As you can see, outcomes 3 and 4 are equally likely. Thus, to emulate a fair 50/50 experiment, we toss the biased coin twice. If both tosses give the same result, we discard them and toss twice again. As soon as one toss gives tails and the other gives heads, we report the result of the first toss of the pair.

Example. Consider the sequence of tosses T, T, H, H, T, H, where T denotes tails and H denotes heads. It describes an experiment in which we first toss the coin twice and get tails both times. We discard this result and toss twice again. This time we get two heads and discard again. We toss twice one more time: first tails, then heads. We record “tails” as our outcome.
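To make the procedure concrete, here is a minimal Python sketch of this trick (often attributed to von Neumann). The names `biased_toss` and `fair_toss` and the test bias of 0.7 are my own choices for illustration, not part of the original problem:

```python
import random

def biased_toss(p=0.7):
    """A biased coin: returns 1 (tails) with probability p and 0 (heads) otherwise.
    The algorithm below never looks at p; 0.7 is only here so the sketch runs."""
    return 1 if random.random() < p else 0

def fair_toss():
    """Toss the biased coin twice until the two results differ,
    then report the result of the first toss of the pair."""
    while True:
        first, second = biased_toss(), biased_toss()
        if first != second:
            return first

# Quick sanity check: the frequency of 1s should be close to 0.5.
print(sum(fair_toss() for _ in range(10_000)) / 10_000)
```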

From now on we can assume that we have a fair coin. We will toss it repeatedly and interpret the results as the digits of the binary representation of a number \(s\) in the interval \([0, 1]\):

$$s=\sum\limits_{i=1}^{+\infty} \frac{t_i}{2^i}$$

Here \(t_i\) denotes the outcome of the \(i\)-th toss and can be either 0 or 1. Since each digit of \(s\) is an independent fair bit, \(s\) itself is uniformly distributed in \([0,1]\). To get one of the three values at random, we split \([0,1]\) into three subintervals \([0, 1/3), [1/3, 2/3], (2/3, 1]\). (A particular point like 1/3 has zero probability, so it doesn’t matter which subinterval we assign it to.) The outcome of the experiment is the index of the subinterval in which \(s\) lies.
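Note that after the first \(k\) tosses the digits \(t_1, \ldots, t_k\) pin \(s\) down to an interval of length \(2^{-k}\):

$$s\in\left[\,\sum\limits_{i=1}^{k} \frac{t_i}{2^i},\ \sum\limits_{i=1}^{k} \frac{t_i}{2^i}+\frac{1}{2^k}\,\right]$$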

In fact, we don’t need the exact value of \(s\). All that matters is the subinterval in which \(s\) lies, so we don’t have to make an infinite number of tosses. Let’s say that the first toss gave us 1. We know that \(s=0.1\ldots\) (in binary), so \(s\in[1/2, 1]\), which overlaps both the second and the third subintervals. If the next toss gives us 0, then \(s=0.10\ldots\) and \(s\in[1/2, 3/4]\), which is still ambiguous. If the next toss gives us 0 again, then \(s=0.100\ldots\) and \(s\in[1/2, 5/8]\), which is a subinterval of \([1/3, 2/3]\). We can stop: the outcome is the second value.
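Here is one possible implementation of this early-stopping procedure, reusing the `fair_toss` helper sketched above; exact fractions are used only to keep the interval arithmetic free of rounding error:

```python
from fractions import Fraction

def pick_uniform_1_2_3():
    """Return 1, 2 or 3 with equal probability, using only fair_toss().
    We refine the interval known to contain s one binary digit at a time and
    stop as soon as it fits inside one of the three subintervals."""
    thirds = [Fraction(k, 3) for k in range(4)]   # 0, 1/3, 2/3, 1
    low, width = Fraction(0), Fraction(1)
    while True:
        width /= 2
        low += fair_toss() * width        # next binary digit of s
        # s is now known to lie in [low, low + width]
        for k in range(3):
            if thirds[k] <= low and low + width <= thirds[k + 1]:
                return k + 1              # subintervals are numbered 1, 2, 3
```

Since the interval containing \(s\) halves with every toss, the loop terminates after finitely many tosses with probability 1.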

This scheme generalizes in an obvious way. To sample from an arbitrary discrete distribution, we simply split \([0,1]\) into subintervals whose lengths match the probabilities of the values.
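A sketch of that generalization, again assuming the `fair_toss` helper from above; compared to the previous snippet only the subinterval boundaries change, and the probabilities are passed in as exact fractions summing to 1:

```python
from fractions import Fraction
from itertools import accumulate

def sample(probabilities):
    """Sample an index 0..n-1 according to `probabilities` (exact Fractions
    summing to 1), using only fair_toss()."""
    # The cumulative sums split [0, 1] into subintervals whose lengths
    # match the given probabilities.
    bounds = [Fraction(0)] + list(accumulate(probabilities))
    low, width = Fraction(0), Fraction(1)
    while True:
        width /= 2
        low += fair_toss() * width
        for k in range(len(probabilities)):
            if bounds[k] <= low and low + width <= bounds[k + 1]:
                return k                  # s is certainly in the k-th subinterval

# Example: an unfair three-sided die with probabilities 1/2, 1/3 and 1/6.
print(sample([Fraction(1, 2), Fraction(1, 3), Fraction(1, 6)]))
```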
