An introduction to Bayes' rule – Chapter 1

This tutorial is taken from Chapter 1 of the book Bayes' Rule: A Tutorial Introduction to Bayesian Analysis, which you can download from the book page. The book page includes a table of contents and computer code in MATLAB, Python and R.

"We balance probabilities and choose the most likely. It is the scientific use of the imagination..."

Sherlock Holmes, The Hound of the Baskervilles

A C Doyle (1901)

Introduction

Bayes' rule is a rigorous method for interpreting evidence in the context of previous experience or knowledge. It was discovered by Thomas Bayes (c. 1701-1761), and independently by Pierre-Simon Laplace (1749-1827).

After more than two centuries of controversy, during which Bayesian methods have been both praised and pilloried, Bayes' rule has recently emerged as a powerful tool with a wide range of applications, which include

  • genetics [2]

  • linguistics [12]

  • image processing [15]

  • brain imaging [33]

  • cosmology [17]

  • machine learning [5]

  • epidemiology [26]

  • psychology [31; 44]

  • forensic science [43]

  • human object recognition [22]

  • evolution [13]

  • visual perception [23; 41]

  • ecology [32]

  • and even the work of the fictional detective Sherlock Holmes [21]

Historically, Bayesian methods were applied by Alan Turing to the problem of decoding the German Enigma code in the Second World War, but this remained secret until recently [16; 29; 37].

Figure 1.1. The fathers of Bayes' rule. Thomas Bayes (left, c. 1701-1761). Pierre-Simon Laplace (right, 1749-1827).

In order to appreciate the inner workings of any of the above applications, we need to understand why Bayes' rule is useful, and how it constitutes a mathematical foundation for reasoning. We will do this using a few accessible examples, but first, we will establish a few ground rules, and provide a reassuring guarantee.

Ground rules

In the examples in this chapter, we will not delve into the precise meaning of probability, but will instead assume a fairly informal notion based on the frequency with which particular events occur. For example, if a bag contains 40 white balls and 60 black balls then the probability of reaching into the bag and choosing a black ball is the same as the proportion of black balls in the bag (ie 60/100=0.6).

From this, it follows that the probability of an event (eg choosing a black ball) can adopt any value between zero and one, with zero meaning it definitely will not occur, and one meaning it definitely will occur. Finally, given a set of mutually exclusive events, such as the outcome of choosing a ball, which has to be either black or white, the probabilities of those events have to add up to one (eg 0.4+0.6 = 1). We explore the subtleties of the meaning of probability in Section 7.1.

A guarantee

Before embarking on these examples, we should reassure ourselves with a fundamental fact regarding Bayes' rule, or Bayes' theorem, as it is also called; Bayes' theorem is not a matter of conjecture. By definition, a theorem is a mathematical statement that has been proved to be true. This is reassuring because, if we had to establish the rules for calculating with probabilities, we would insist that the result of such calculations must tally with our everyday experience of the physical world, just as surely as we would insist that 1+1=2.

Indeed, if we insist that probabilities must be combined with each other in accordance with certain common sense principles then Cox (1946) showed that this leads to a unique set of rules, a set which includes Bayes' rule. Bayes' rule also appears as part of Kolmogorov's (1933) [24] (arguably more rigorous) theory of probability.

1.1. Example 1: Poxy diseases

The patient's perspective

Suppose that you wake up one day with spots all over your face, as in Figure 1.2. The doctor tells you that 90% of people who have smallpox have the same symptoms as you have. In other words, the probability of having these symptoms given that you have smallpox is 0.9 (ie 90%). As smallpox is often fatal, you are naturally terrified.

However, after a few moments of contemplation, you decide that you do not want to know the probability that you have these symptoms (after all, you already know you have them). Instead, what you really want to know is the probability that you have smallpox.

So you say to your doctor, "Yes, but what is the probability that I have smallpox given that I have these symptoms?". "Ah", says your doctor, "a very good question." After scribbling some equations, your doctor looks up. "The probability that you have smallpox given that you have these symptoms is 1.1%, or equivalently, 0.011."

Of course, this is not good news, but it sounds better than 90%, and (more importantly) it is at least useful information. This demonstrates the stark contrast between the probability of the symptoms given a disease (which you do not want to know) and the probability of the disease given the symptoms (which you do want to know).

Figure 1.2. Thomas Bayes diagnosing a patient. Chickenpox or smallpox?

Bayes' rule transforms probabilities that look useful (but are often not), into probabilities that are useful. In the above example, the doctor used Bayes' rule to transform the uninformative probability of your symptoms given that you have smallpox into the informative probability that you have smallpox given your symptoms.

The doctor's perspective

Now, suppose you are a doctor, confronted with a patient who is covered in spots. The patient's symptoms are consistent with chickenpox, but they are also consistent with another, more dangerous, disease, smallpox. So you have a dilemma. You know that 80% of people with chickenpox have spots, but also that 90% of people with smallpox have spots. So the probability (0.8) of the symptoms given that the patient has chickenpox is similar to the probability (0.9) of the symptoms given that the patient has smallpox (see Figure 1.2).

If you were a doctor with limited experience then you might well think that both chickenpox and smallpox are equally probable. But, as you are a knowledgeable doctor, you know that chickenpox is common, whereas smallpox is rare. This knowledge, or prior information, can be used to decide which disease the patient probably has. If you had to guess (and you do have to guess because you are the doctor) then you would combine the possible diagnoses implied by the symptoms with your prior knowledge to arrive at a conclusion (ie that the patient probably has chickenpox). In order to make this example more tangible, let's run through it again, this time with numbers.

The doctor's perspective (with numbers)

We can work out probabilities associated with a disease by making use of public health statistics. Suppose doctors are asked to report the number of cases of smallpox and chickenpox, and the symptoms observed. Using the results of such surveys, it is a simple matter to find the proportion of patients diagnosed with smallpox and chickenpox, and each patient's symptoms (eg spots). Using these data, we might find that the probability that a patient has spots given that the patient has smallpox is 90% or 0.9. We can write this in an increasingly succinct manner using a special notation

p(symptoms are spots | disease is smallpox) = 0.9,

(1.1)

where the letter p stands for probability, and the vertical bar | stands for "given that". So, this short-hand statement should be read as:

"the probability that the patient's symptoms are spots given that he has smallpox is 90% or 0.9".

The vertical bar indicates that the probability that the patient has spots depends on the presence of smallpox. Thus, the probability of spots is said to be conditional on the disease under consideration. For this reason, such probabilities are known as conditional probabilities. We can write this even more succinctly as

p(spots|smallpox) = 0.9.

(1.2)

Similarly, we might find that spots are observed in 80% of patients who have chickenpox, which is written as

p(spots|chickenpox) = 0.8.

(1.3)

Equations 1.2 and 1.3 formalise why we should not use the symptoms alone to decide which disease the patient has. These equations take no account of our previous experience of the relative prevalence of smallpox and chickenpox, and are based only on the observed symptoms. As we shall see later, this is equivalent to making a decision based on the (in this case, false) assumption that both diseases are equally prevalent in the population, and that they are therefore a priori equally probable.

Note that the conditional probability p(spots|smallpox) is the probability of spots given that the patient has smallpox, but it is called the likelihood of smallpox (which is confusing, but standard, nomenclature). In this example, the disease smallpox has a larger likelihood than chickenpox. Indeed, as there are only two diseases under consideration, this means that, of the two possible alternatives, smallpox has the maximum likelihood. The disease with the maximum value of likelihood is known as the maximum likelihood estimate (MLE) of the disease that the patient has. Thus, in this case, the MLE of the disease is smallpox.

As discussed above, it would be hard to argue that we should disregard our knowledge or previous experience when deciding which disease the patient has. But exactly how should this previous experience be combined with current evidence (eg symptoms)? From a purely intuitive perspective, it would seem sensible to weight the likelihood of each disease according to previous experience of that disease, as in Figure 1.3.

Since smallpox is rare, and is therefore intrinsically improbable, it might be sensible to weight the likelihood of smallpox by a small number. This would yield a small "weighted likelihood", which would be a more realistic estimate of the probability that the patient has smallpox. For example, public health statistics may inform us that the prevalence of smallpox in the general population is 0.001, meaning that there is a one in a thousand chance that a randomly chosen individual has smallpox. Thus, the probability that a randomly chosen individual has smallpox is

Figure 1.3. Schematic representation of Bayes' rule. Data, in the form of symptoms, are used to find a likelihood, which is the probability of those symptoms given that the patient has a specific disease. Bayes' rule combines this likelihood with prior knowledge, and yields the posterior probability that the patient has the disease given that he has the symptoms observed.

p(smallpox) = 0.001

(1.4)

This represents our prior knowledge about the disease in the population before we have observed our patient, and is known as the prior probability that any given individual has smallpox. As our patient (before we have observed his symptoms) is as likely as any other individual to have smallpox, we know that the prior probability that he has smallpox is 0.001.

If we follow our common sense prescription, and simply weight (ie multiply) each likelihood by its prior probability then we obtain "weighted likelihood" quantities which take account of the current evidence and of our prior knowledge of each disease. In short, this common sense prescription leads to Bayes' rule. Even so, the equation for Bayes' rule given below is not obvious, and should be taken on trust for now. In the case of smallpox, Bayes' rule is

p(smallpox|spots) = p(spots|smallpox) × p(smallpox) / p(spots).

(1.5)

The term p(spots) in the denominator of Equation 1.5 is the proportion of people in the general population that have spots, and therefore represents the probability that a randomly chosen individual has spots. As will be explained on p. 15, this term is often disregarded, but we use a value that makes our sums come out neatly, and assume that p(spots) = 0.081 (ie 81 in every 1,000 individuals have spots). If we now substitute numbers into this equation then we obtain

p(smallpox|spots) = 0.9 × 0.001/0.081

(1.6)

= 0.011,

(1.7)

which is the conditional probability that the patient has smallpox given that his symptoms are spots.

Crucially, the "weighted likelihood" p(smallpox|spots) is also a conditional probability, but it is the probability of the disease smallpox given the symptoms observed, as shown in Figure 1.4. So, by making use of prior experience, we have transformed the conditional probability of the observed symptoms given a specific disease (the likelihood, which is based only on the available evidence) into a more useful conditional probability: the probability that the patient has a particular disease (smallpox) given that he has particular symptoms (spots).

In fact, we have just made use of Bayes' rule to convert one conditional probability, the likelihood p(spots|smallpox) into a more useful conditional probability, which we have been calling a "weighted likelihood", but is formally known as the posterior probability p(smallpox|spots).

As noted above, both p(smallpox|spots) and p(spots|smallpox) are conditional probabilities, which have the same status from a mathematical viewpoint. However, for Bayes' rule, we treat them very differently.

The conditional probability p(spots|smallpox) is based only on the observed data (symptoms), and is therefore easier to obtain than the conditional probability we really want, namely p(smallpox|spots), which is based not only on the observed data but also on prior knowledge. For historical reasons, these two conditional probabilities have special names. As we have already seen, the conditional probability p(spots|smallpox) is the probability that a patient has spots given that he has smallpox, and is known as the likelihood of smallpox. The complementary conditional probability p(smallpox|spots) is the posterior probability that a patient has smallpox given that he has spots.

In essence, Bayes' rule is used to combine prior experience (in the form of a prior probability) with observed data (spots) (in the form of a likelihood) to interpret these data (in the form of a posterior probability). This process is known as Bayesian inference.

The perfect inference engine

Bayesian inference is not guaranteed to provide the correct answer. Instead, it provides the probability that each of a number of alternative answers is true, and these can then be used to find the answer that is most probably true. In other words, it provides an informed guess. While this may not sound like much, it is far from random guessing. Indeed, it can be shown that no other procedure can provide a better guess, so that Bayesian inference can be justifiably interpreted as the output of a perfect guessing machine, a perfect inference engine (see Section 4.9, p. 92). This perfect inference engine is fallible, but it is provably less fallible than any other.

Figure 1.4. Comparing the probability of chickenpox and smallpox using Bayesian inference. The observed symptoms x seem to be more consistent with smallpox θs than chickenpox θc, as indicated by their likelihood values. However, the background rate of chickenpox in the population is higher than that of smallpox, which, in this case, makes it more probable that the patient has chickenpox, as indicated by its higher posterior probability.

Making a diagnosis

In order to make a diagnosis, we need to know the posterior probability of both of the diseases under consideration. Once we have both posterior probabilities, we can compare them in order to choose the disease that is most probable given the observed symptoms.

Suppose that the prevalence of chickenpox in the general population is 10% or 0.1. This represents our prior knowledge about chickenpox before we have observed any symptoms, and is written as

p(chickenpox) = 0.1,

(1.8)

which is the prior probability of chickenpox. As was done in Equation 1.6 for smallpox, we can weight the likelihood of chickenpox with its prior probability to obtain the posterior probability of chickenpox

p(chickenpox|spots) = p(spots|chickenpox) × p(chickenpox) / p(spots) = 0.8 × 0.1/0.081 = 0.988.

(1.9)

The two posterior probabilities, summarised in Figure 1.4, are therefore

p(smallpox|spots) = 0.011

(1.10)

p(chickenpox|spots) = 0.988.

(1.11)

Thus, the posterior probability that the patient has smallpox is 0.011, and the posterior probability that the patient has chickenpox is 0.988. Aside from a rounding error, these sum to one.

Notice that we cannot be certain that the patient has chickenpox, but we can be certain that there is a 98.8% probability that he does. This is not only our best guess, but it is provably the best guess that can be obtained; it is effectively the output of a perfect inference engine.

In summary, if we ignore all previous knowledge regarding the prevalence of each disease then we have to use the likelihoods to decide which disease is present. The likelihoods shown in Equations 1.2 and 1.3 would lead us to diagnose the patient as probably having smallpox. However, a more informed decision can be obtained by taking account of prior information regarding the diseases under consideration.

When we do take account of prior knowledge, Equations 1.10 and 1.11 indicate that the patient probably has chickenpox. In fact, these equations imply that the patient is about 89 (=0.988/0.011) times more likely to have chickenpox than smallpox. As we shall see later, this ratio of posterior probabilities plays a key role in Bayesian statistical analysis (Section 1.1, p. 14).

Taking account of previous experience yields the diagnosis that is most probable, given the evidence (spots). As this is the decision associated with the maximum value of the posterior probability, it is known as the maximum a posteriori or MAP estimate of the disease.
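The book page provides complete MATLAB, Python and R code; the minimal Python sketch below simply reproduces the diagnosis above using the chapter's numbers (the variable names are illustrative, not taken from the book's code).

```python
# Posterior probabilities for the two diseases (Equations 1.6-1.11),
# computed as likelihood x prior / marginal likelihood.

likelihood = {"smallpox": 0.9, "chickenpox": 0.8}    # p(spots|disease)
prior      = {"smallpox": 0.001, "chickenpox": 0.1}  # p(disease)
p_spots    = 0.081                                   # marginal likelihood p(spots)

posterior = {d: likelihood[d] * prior[d] / p_spots for d in likelihood}
print(posterior)  # {'smallpox': 0.0111..., 'chickenpox': 0.9876...}

# The MAP estimate is the disease with the largest posterior probability.
print(max(posterior, key=posterior.get))  # chickenpox
```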

The equation used to perform Bayesian inference is called Bayes' rule, and in the context of diagnosis is

p(disease|symptoms) = p(symptoms|disease) × p(disease) / p(symptoms),

(1.12)

which is easier to remember as

posterior = likelihood × prior / marginal likelihood.

(1.13)

The marginal likelihood is also known as evidence, and we shall have more to say about it shortly.

Bayes' rule: hypothesis and data

If we consider a putative disease to represent a specific hypothesis, and the symptoms to be some observed data, then Bayes' rule becomes

p(hypothesis|data) = p(data|hypothesis) × p(hypothesis) / p(data),

where the word "hypothesis" should be interpreted as "hypothesis is true". Written in this form, the contrast between the likelihood and the posterior probability is more apparent. Specifically, the probability that the proposed hypothesis is true given some data that were actually observed is the posterior probability

p(hypothesis|data),

(1.14)

whereas the probability of observing the data given that the hypothesis is true is the likelihood

p(data|hypothesis).

(1.15)

A more succinct notation

We now introduce a succinct, and reasonably conventional, notation for the terms defined above. There is nothing new in the mathematics of this section, just a re-writing of equations used above. If we represent the observed symptoms by x, and the disease by the Greek letter theta, θs (where the subscript s stands for smallpox), then we can write the conditional probability (ie the likelihood of smallpox) in Equation 1.2 as

p(x|θs) = p(spots|smallpox) = 0.9.

(1.16)

Similarly, the background rate of smallpox θs in the population can be represented as the prior probability

p(θs) = p(smallpox) = 0.001,

(1.17)

and the probability of the symptoms (the marginal likelihood) is

p(x) = p(spots) = 0.081.

(1.18)

Substituting this notation into Equation 1.5 (repeated here)

p(smallpox|spots) = p(spots|smallpox) × p(smallpox) / p(spots)

(1.19)

yields

p(θs|x) = p(x|θs) × p(θs) / p(x) = 0.9 × 0.001/0.081 = 0.011.

(1.20)

Similarly, if we define

p(x|θc) = p(spots|chickenpox) = 0.8,

(1.21)

then we can re-write Equation 1.9 to obtain the posterior probability of chickenpox as

p(θc|x) = p(x|θc) × p(θc) / p(x) = 0.8 × 0.1/0.081 = 0.988.

(1.22)

If we use θ without a subscript to represent any disease (or hypothesis), and x to represent any observed symptoms (or data) then Bayes' rule can be written as (we now drop the use of the × symbol)

p(θ|x) = p(x|θ) p(θ) / p(x).

(1.23)

Finally, we should note that smallpox made history by being the first disease to be eradicated from the Earth in 1979, which makes the prior probability of catching it somewhat less than the value p(θs) = 0.001 assumed in the above example.

Parameters and variables: Notice that there is nothing special about which symbol stands for disease and which for symptoms, and that we could equally well have used θ to represent symptoms, and x to represent diseases. However, it is common to use a Greek letter like θ to represent the thing we wish to estimate, and x to represent the evidence (eg symptoms) on which our estimated value of θ will be based. Similarly, using an equally arbitrary but standard convention, the symbol that represents the thing we wish to estimate is usually called a parameter (θ), whereas the evidence used to estimate that thing is usually called a variable (x).

Model selection, posterior ratios and Bayes factors

As noted above, when we take account of prior knowledge, it turns out that the patient is about 90 times more likely (ie 0.988 vs 0.011) to have chickenpox than smallpox. Indeed, it is often the case that we wish to compare the relative probabilities of two hypotheses (eg diseases). As each hypothesis acts as a (simple) model for the data, and we wish to select the most probable model, this is known as model selection, which involves a comparison using a ratio of posterior probabilities.

The posterior ratio, which is also known as the posterior odds between the hypotheses θc and θs, is

Rpost = p(θc|x) / p(θs|x).

(1.24)

If we apply Bayes' rule to the numerator and denominator then

Rpost = [p(x|θc) p(θc) / p(x)] / [p(x|θs) p(θs) / p(x)],

(1.25)

where the marginal likelihood p(x) cancels, so that

Rpost = [p(x|θc) / p(x|θs)] × [p(θc) / p(θs)].

(1.26)

This is a product of two ratios, the ratio of likelihoods, or Bayes factor

B = p(x|θc) / p(x|θs),

(1.27)

and the ratio of priors, or prior odds between θc and θs, which is

Rprior = p(θc) / p(θs).

(1.28)

Thus, the posterior odds can be written as

Rpost = B × Rprior

(1.29)

which, in words, is: posterior odds = Bayes factor × prior odds. In this example, we have

Rpost = (0.8/0.9) × (0.1/0.001) = 0.889 × 100 = 88.9.

Note that the likelihood ratio (Bayes factor) is less than one (and so favours θs), whereas the prior odds is much greater than one (and favours θc), with the result that the posterior odds come out massively in favour of θc. If the posterior odds is greater than 3 or less than 1/3 (in both cases one hypothesis is more than 3 times more probable than the other) then this is considered to represent a substantial difference between the probabilities of the two hypotheses¹, so a posterior odds of 88.9 is definitely substantial.
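The decomposition of the posterior odds into a Bayes factor and prior odds is easy to verify numerically; here is a minimal Python sketch using the chapter's numbers (variable names are illustrative).

```python
# Posterior odds = Bayes factor x prior odds (Equations 1.27-1.29).

p_x_given_c, p_x_given_s = 0.8, 0.9  # likelihoods p(x|theta_c), p(x|theta_s)
p_c, p_s = 0.1, 0.001                # priors p(theta_c), p(theta_s)

bayes_factor   = p_x_given_c / p_x_given_s  # B = 0.889, favours theta_s
prior_odds     = p_c / p_s                  # R_prior = 100, favours theta_c
posterior_odds = bayes_factor * prior_odds  # R_post = 88.9, favours theta_c
print(bayes_factor, prior_odds, posterior_odds)
```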

Ignoring the marginal likelihood

As promised, we consider the marginal likelihood p(symptoms) or p(x) briefly here (and in Chapter 2 and Section 4.5). The marginal likelihood refers to the probability that a randomly chosen individual has the symptoms that were actually observed, which we can interpret as the prevalence of spots in the general population.

Crucially, the decision as to which disease the patient has depends only on the relative sizes of the posterior probabilities (eg Equations 1.10 and 1.11, or Equations 1.20 and 1.22). Note that each of these posterior probabilities is proportional to 1/p(symptoms), written as 1/p(x) in Equations 1.20 and 1.22. This means that a different value of the marginal probability p(symptoms) would change all of the posterior probabilities by the same proportion, and therefore has no effect on their relative magnitudes.

For example, if we arbitrarily decided to double the value of the marginal likelihood from 0.081 to 0.162 then both posterior probabilities would be halved (from 0.011 and 0.988 to about 0.005 and 0.494), but the posterior probability of chickenpox would still be 88.9 times larger than the posterior probability of smallpox. Indeed, the previous section on Bayes factors relies on the fact that the ratio of two posterior probabilities is independent of the value of the marginal probability.
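This invariance can be checked directly, as in the sketch below (a Python illustration using the chapter's numbers; the function name is not from the book): doubling p(x) halves both posteriors but leaves their ratio at 88.9.

```python
# Scaling the marginal likelihood p(x) rescales every posterior by the
# same factor, so the posterior ratio (and the MAP decision) is unchanged.

def posteriors(p_x):
    return {"smallpox": 0.9 * 0.001 / p_x, "chickenpox": 0.8 * 0.1 / p_x}

for p_x in (0.081, 0.162):
    post = posteriors(p_x)
    print(p_x, post, post["chickenpox"] / post["smallpox"])  # ratio: 88.9
```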

In summary, the value of the marginal probability has no effect on which disease yields the largest posterior probability (eg Equations 1.10 and 1.11), and therefore has no effect on the decision regarding which disease the patient probably has.

1.2. Example 2: Forkandles

The example above is based on medical diagnosis, but Bayes' rule can be applied to any situation where there is uncertainty regarding the value of a measured quantity, such as the acoustic signal that reaches the ear when some words are spoken. The following example follows a similar line of argument to the previous one and, aside from the change in context, provides no new information for the reader to absorb.

If you walked into a hardware store and asked, "Have you got fork handles?", then you would be surprised to be presented with four candles. Even though the phrases "fork handles" and "four candles" are acoustically almost identical, the shop assistant knows that he sells many more candles than fork handles (Figure 1.5). This in turn, means that he probably does not even hear the words "fork handles", but instead hears "four candles". What has this got to do with Bayes' rule?

The acoustic data that correspond to the sounds spoken by the customer are equally consistent with two interpretations, but the assistant assigns a higher weighting to one interpretation. This weighting is based on his prior experience, so he knows that customers are more likely to request four candles than fork handles. The experience of the assistant allows him to hear what was probably said by the customer, even though the acoustic data were ambiguous. Without knowing it, he has probably used something like Bayes' rule to hear what the customer probably said.

Figure 1.5. Thomas Bayes trying to make sense of a London accent, which removes the "h" sound from the word "handle", so the phrase fork handles is pronounced "fork 'andles", and therefore sounds like "four candles" (see Fork Handles YouTube clip by The Two Ronnies).

Likelihood: answering the wrong question

Given that the two possible phrases are "four candles" and "fork handles", we can formalise this scenario by considering the probability of the acoustic data given each of the two possible phrases.

In both cases, the probability of the acoustic data depends on the words spoken, and this dependence is made explicit as two probabilities:

  1. The probability of the acoustic data given four candles was spoken.

  2. The probability of the acoustic data given fork handles was spoken.

A short-hand way of writing these is

p(acoustic data|four candles)

p(acoustic data|fork handles),

(1.30)

where the expression p(acoustic data|four candles), for example, is interpreted as the likelihood that the phrase spoken was "four candles". As both phrases are consistent with the acoustic data, the probability of the data is almost the same in both cases. That is, the probability of the data given that "four candles" was spoken is almost the same as the probability of the data given that "fork handles" was spoken. For simplicity, we will assume that these probabilities are

p(data|four candles) = 0.6

p(data|fork handles) = 0.7.

(1.31)

Knowing these two likelihoods does allow us to find an answer, but it is an answer to the wrong question. Each likelihood above provides an answer to the (wrong) question: what is the probability of the observed acoustic data given that each of two possible phrases was spoken?

Posterior probability: answering the right question

The right question, the question to which we would like an answer is: what is the probability that each of the two possible phrases was spoken given the acoustic data? The answer to this, the right question, is implicit in two new conditional probabilities, the posterior probabilities

p(four candles|data)

p(fork handles|data),

(1.32)

as shown in Figures 1.6 and 1.7. Notice the subtle difference between the pair of Equations 1.31 and the pair 1.32. Equations 1.31 tell us the likelihoods, the probability of the data given two possible phrases, which turn out to be almost identical for both phrases in this example. In contrast, Equations 1.32 tell us the posterior probabilities, the probability of each phrase given the acoustic data.

Crucially, each likelihood tells us the probability of the data given a particular phrase, but takes no account of how often that phrase has been given (ie has been encountered) in the past. In contrast, each posterior probability depends, not only on the data (in the form of the likelihood), but also on how frequently each phrase has been encountered in the past; that is, on prior experience.

So, we want the posterior probability, but we have the likelihood. Fortunately, Bayes' rule provides a means of getting from the likelihood to the posterior, by making use of extra knowledge in the form of prior experience, as shown in Figure 1.6.

Prior probability

Let's suppose that the assistant has been asked for four candles a total of 90 times in the past, whereas he has been asked for fork handles only 10 times. To keep matters simple, let's also assume that the next customer will ask either for four candles or fork handles (we will revisit this simplification later). Thus, before the customer has uttered a single word, the assistant estimates that the probability that he will say each of the two phrases is

p(four candles) = 90/100 = 0.9

p(fork handles) = 10/100 = 0.1.

(1.33)

These two prior probabilities represent the prior knowledge of the assistant, based on his previous experience of what customers say.

When confronted with an acoustic signal that has one of two possible interpretations, the assistant naturally interprets this as "four candles", because, according to his past experience, this is what such ambiguous acoustic data usually means in practice. So, he takes the two almost equal likelihood values, and assigns a weighting to each one, a weighting that depends on past experience, as in Figure 1.7. In other words, he uses the acoustic data, and combines it with his previous experience to make an inference about which phrase was spoken.

Inference

One way to implement this weighting (ie to do this inference) is to simply multiply the likelihood of each phrase by how often that phrase has occurred in the past. In other words, we multiply the likelihood of each putative phrase by its corresponding prior probability. The result yields a posterior probability for each possible phrase.

Figure 1.6. A schematic representation of Bayes' rule. Data alone, in the form of acoustic data, can be used to find a likelihood value, which is the conditional probability of the acoustic data given some putative spoken phrase. When Bayes' rule is used to combine this likelihood with prior knowledge then the result is a posterior probability, which is the probability of the phrase given the observed acoustic data.

Figure 1.7. Bayesian inference applied to speech data.

p(phrase|data) = p(data|phrase) × p(phrase) / p(data),

(1.34)

where p(data) is the marginal likelihood, which is the probability of the observed data.

In order to ensure that the posterior probabilities sum to one, the value of p(data) is 0.61 in this example, but as we already know from Section 1.1 (p. 15), its value is not important for our purposes. If we substitute the likelihood and prior probability values defined in Equations 1.31 and 1.33 into 1.34 then we obtain the posterior probabilities

p(four candles|data) = 0.6 × 0.9/0.61 = 0.885

p(fork handles|data) = 0.7 × 0.1/0.61 = 0.115.

(1.35)

These two posterior probabilities represent the answer to the right question, so we can now see that the probability that the customer said "four candles" is 0.885 whereas the probability that the customer said "fork handles" is 0.115. As "four candles" is associated with the highest value of the posterior probability, it is the maximum a posteriori (MAP) estimate of the phrase that was spoken. The process that makes use of evidence (here, acoustic data) to produce these posterior probabilities is called Bayesian inference.
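Under the simplifying assumption made above (the customer says one of just two phrases), these posteriors can be reproduced in a few lines of Python; note that setting p(data) to the sum of likelihood × prior over the two phrases is precisely what makes the posteriors sum to one. The sketch below is illustrative, with made-up variable names.

```python
# Posterior probabilities for the two phrases (Equations 1.34-1.35).

likelihood = {"four candles": 0.6, "fork handles": 0.7}  # p(data|phrase)
prior      = {"four candles": 0.9, "fork handles": 0.1}  # p(phrase)
p_data     = sum(likelihood[w] * prior[w] for w in likelihood)  # 0.61

posterior = {w: likelihood[w] * prior[w] / p_data for w in likelihood}
print(posterior)  # {'four candles': 0.885..., 'fork handles': 0.114...}
```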

1.3. Example 3: Flipping coins

This example follows the same line of reasoning as those above, but also contains specific information on how to combine probabilities from independent events, such as coin flips. This will prove crucial in a variety of contexts, and in examples considered later in this book.

Here, our task is to decide how unfair a coin is, based on just two coin flips. Normally, we assume that coins are fair or unbiased, so that a large number of coin flips (eg 1,000) yields an equal number of heads and tails. But suppose there was a fault in the machine that minted coins, so that each coin had more metal on one side or the other, with the result that each coin is biased to produce more heads than tails, or vice versa.

Specifically, 25% of the coins produced by the machine have a bias of 0.4, and 75% have a bias of 0.6. By definition, a coin with a bias of 0.4 produces a head on 40% of flips, whereas a coin with a bias of 0.6 produces a head on 60% of flips (on average). Now, suppose we choose one coin at random, and attempt to decide which of the two bias values it has. For brevity, we define the coin's bias with the parameter θ, so the true value of θ for each coin is either θ0.4 = 0.4, or θ0.6 = 0.6.

One coin flip

Here we use one coin flip to define a few terms that will prove useful below. For each coin flip, there are two possible outcomes: a head xh and a tail xt. For example, if the coin's bias is θ0.6 then, by definition, the conditional probability of observing a head is θ0.6

p(xh|θ0.6) = θ0.6 = 0.6.

(1.36)

Similarly, the conditional probability of observing a tail is

p(xt|θ0.6) = (1 – θ0.6) = 0.4,

(1.37)

where both of these conditional probabilities are likelihoods. Note that we follow the convention of the previous examples by using θ to represent the parameter whose value we wish to estimate, and x to represent the data used to estimate the true value of θ.

Figure 1.8. Thomas Bayes trying to decide the value of a coin's bias.

Two coin flips

Consider a coin with a bias θ (where θ could be 0.4 or 0.6, for example). Suppose we flip this coin twice, and obtain a head xh followed by a tail xt, which defines the ordered list or permutation

x = (xh, xt).

(1.38)

As the outcome of one flip is not affected by any other flip outcome, outcomes are said to be independent (see Section 2.2 or Appendix C). This independence means that the probability of observing any two outcomes can be obtained by multiplying their probabilities

p(x|θ) = p((xh , xt)|θ)

(1.39)

= p(xh|θ) × p(xt|θ).

(1.40)

More generally, for a coin with a bias θ, the probability of a head xh is p(xh|θ) = θ, and the probability of a tail xt is therefore p(xt|θ) = (1 − θ). It follows that Equation 1.40 can be written as

p(x|θ) = θ × (1 − θ),

(1.41)

which will prove useful below.
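The product rule for independent flips is easily expressed in code; the following Python sketch (with an illustrative function name) evaluates Equation 1.41 for any outcome sequence.

```python
# Likelihood of an outcome sequence for a coin with bias theta, assuming
# independent flips: multiply the per-flip probabilities (Equations 1.39-1.41).

def likelihood(theta, flips):
    """Return p(x|theta) for a sequence like 'HT' ('H' = head, 'T' = tail)."""
    p = 1.0
    for flip in flips:
        p *= theta if flip == "H" else (1 - theta)
    return p

print(likelihood(0.6, "HT"))  # 0.6 * 0.4 = 0.24
print(likelihood(0.4, "HT"))  # 0.4 * 0.6 = 0.24, equal, as in the text
```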

Figure 1.9. A schematic representation of Bayes' rule, applied to the problem of estimating the bias of a coin based on data which is the outcome of two coin flips.

The likelihoods of different coin biases

According to Equation 1.41, if the coin bias is θ0.6 then

p(x|θ0.6 ) = θ0.6 × (1 − θ0.6)

(1.42)

= 0.6 × 0.4

(1.43)

= 0.24,

(1.44)

and if the coin bias is θ0.4 then (the result is the same)

p(x|θ0.4 ) = θ0.4 × (1 − θ0.4)

(1.45)

= 0.4 × 0.6

(1.46)

= 0.24.

(1.47)

Note that the only difference between these two cases is the reversed ordering of terms in Equations 1.43 and 1.46, so that both values of θ have equal likelihood values. In other words, the observed data x are equally probable given the assumption that θ0.4 = 0.4 or θ0.6 = 0.6, so they do not help in deciding which bias our chosen coin has.

Figure 1.10. Bayesian inference applied to coin flip data.

Prior probabilities of different coin biases

We know (from above) that 25% of all coins have a bias of θ0.4, and that 75% of all coins have a bias of θ0.6. Thus, even before we have chosen our coin, we know (for example) there is a 75% chance that it has a bias of 0.6. This information defines the prior probability that any coin has one of two bias values, either p(θ0.4) = 0.25, or p(θ0.6) = 0.75.

Posterior probabilities of different coin biases

As in previous examples, we adopt the naïve strategy of simply weighting each likelihood value by its corresponding prior (and dividing by p(x)) to obtain Bayes' rule

p(θ0.6|x) = p(x|θ0.6) p(θ0.6) / p(x) = 0.24 × 0.75/0.24 = 0.75

(1.48)

p(θ0.4|x) = p(x|θ0.4) p(θ0.4) / p(x) = 0.24 × 0.25/0.24 = 0.25.

(1.49)

In order to ensure posterior probabilities sum to one, we have assumed a value for the marginal probability of p(x) = 0.24 (but we know from p. 15 that its value makes no difference to our final decision about coin bias). As shown in Figures 1.9 and 1.10, the probabilities in Equations 1.48 and 1.49 take account of both the data and of prior experience, and are therefore posterior probabilities.

In summary, whereas the equal likelihoods in this example (Equations 1.44 and 1.47) did not allow us to choose between the coin biases θ0.4 and θ0.6, the values of the posterior probabilities (Equations 1.48 and 1.49) imply that a bias of θ0.6 is 3 (=0.75/0.25) times more probable than a bias of θ0.4.
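A short Python sketch of Equations 1.48 and 1.49 (illustrative names, chapter's numbers) makes the role of the priors explicit: because the two likelihoods are equal, the posteriors simply reproduce the priors.

```python
# Posterior probabilities of the two bias values after observing (head, tail).

likelihood = {0.4: 0.24, 0.6: 0.24}  # p(x|theta), equal for both biases
prior      = {0.4: 0.25, 0.6: 0.75}  # p(theta)
p_x        = sum(likelihood[t] * prior[t] for t in likelihood)  # 0.24

posterior = {t: likelihood[t] * prior[t] / p_x for t in likelihood}
print(posterior)  # {0.4: 0.25, 0.6: 0.75}
```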

1.4. Example 4: Light craters

When you look at Figure 1.11, do you see a hill or a crater? Now turn the page upside-down. When you invert the page, the content of the picture does not change, but what you see does change (from a hill to a crater). This illusion almost certainly depends on the fact that your visual system assumes that the scene is lit from above. This, in turn, forces you to interpret Figure 1.11 as a hill, and the inverted version as a crater (which it is, in reality).

Figure 1.11. Is this a hill or a crater? Try turning the image upside-down. (Barringer crater, with permission, United States Geological Survey)

In terms of Bayes' rule, the image data are equally consistent with a hill and a crater, where each interpretation corresponds to a different maximum likelihood value. Therefore, in the absence of any prior assumptions on your part, you should see the image as depicting either a hill or a crater with equal probability. However, the assumption that light comes from above corresponds to a prior, and this effectively forces you to interpret the image as a hill or a crater, depending on whether the image is inverted or not.

Note that there is no uncertainty or noise; the image is perfectly clear, but also perfectly ambiguous without the addition of a prior regarding the light source. This example demonstrates that Bayesian inference is useful even when there is no noise in the observed data, and that even the apparently simple act of seeing requires the use of prior information [10; 40; 41; 42].

"Seeing is not a direct apprehension of reality, as we often like to pretend. Quite the contrary: seeing is inference from incomplete information..."

E T Jaynes, 2003 (p. 133) [18].

1.5. Forward and inverse probability

If we are given a coin with a known bias of, say, θ = 0.6, then the probability of a head for each coin flip is given by the likelihood p(xh|θ) = 0.6. This is an example of a forward probability, which involves calculating the probability of each of a number of different consequences (eg obtaining two heads) given some known cause or fact; see Figure 1.12. If this coin is flipped 100 times then the number of heads could be 62, so the actual proportion of heads is xtrue = 0.62. But, because no measurement is perfectly reliable, we may mis-count 62 as 64 heads, so the measured proportion is x = 0.64.

Consequently, there is a difference, often called noise, between the true coin bias and the measured proportion of heads. The source of this noise may be due to the probabilistic nature of coin flips or to our inability to measure the number of heads accurately. Whatever the cause of the noise, the only information we have is the measured number of heads, and we must use this information as wisely as possible.
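The forward direction is straightforward to simulate, as the Python sketch below illustrates (the seed is arbitrary): a coin with a known bias is flipped 100 times, and the measured proportion of heads varies from run to run around the true bias.

```python
# Forward probability (Figure 1.12, top): from a known bias theta_true,
# generate flip outcomes and measure the proportion of heads.

import random

random.seed(1)  # arbitrary seed, for reproducibility of this illustration
theta_true = 0.6
flips = [random.random() < theta_true for _ in range(100)]
print(sum(flips) / 100)  # close to, but rarely exactly, 0.6
```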

Figure 1.12. Forward and inverse probability.

Top: Forward probability. A parameter value θtrue (eg coin bias) which is implicit in a physical process (eg coin flipping) yields a quantity xtrue (eg proportion of heads), which is measured as x, using an imperfect measurement process.

Bottom: Inverse probability. Given a mathematical model of the physics that generated xtrue , the measured value x implies a range of possible values for the parameter θ. The probability of x given each possible value of θ defines a likelihood. When combined with a prior, each likelihood yields a posterior probability p(θ|x), which allows an estimate θest of θtrue to be obtained.

The converse of reasoning forwards from a given physical parameter or scenario involves a harder problem, also illustrated in Figure 1.12. Reasoning backwards from measurements (eg coin flips or images) amounts to finding the posterior or inverse probability of the value of an unobserved variable (eg coin bias, 3D shape), which is usually the cause of the observed measurement. By analogy, arriving at the scene of a crime, a detective must reason backwards from the clues, as eloquently expressed by Sherlock Holmes:

Most people, if you describe a train of events to them, will tell you what the result would be. They can put those events together in their minds, and argue from them that something will come to pass. There are few people, however, who, if you told them a result, would be able to evolve from their own inner consciousness what the steps were that led to that result. This power is what I mean when I talk of reasoning backward, or analytically.

Sherlock Holmes, from A Study in Scarlet.

A C Doyle (1887)

Indeed, finding inverse probabilities is precisely the problem Bayes' rule is designed to tackle.

Summary

All decisions should be based on evidence, but the best decisions should also be based on previous experience. The above examples demonstrate not only that prior experience is crucial for interpreting evidence, but also that Bayes' rule provides a rigorous method for doing so.

Note that this text includes corrections in the eighth printing (2017).

References

  1. Bayes, T. (1763). An essay towards solving a problem in the doctrine of chances. Phil Trans Roy Soc London, 53, 370–418.

  2. Beaumont, M. (2004). The Bayesian revolution in genetics. Nature Reviews Genetics, 251–261.

  3. Bernardo, J. (1979). Reference posterior distributions for Bayesian inference. J. Royal Statistical Society B, 41, 113–147.

  4. Bernardo, J. and Smith, A. (2000). Bayesian Theory. John Wiley and Sons Ltd.

  5. Bishop, C. (2006). Pattern recognition and machine learning. Springer.

  6. Cowan, G. (1998). Statistical data analysis. OUP.

  7. Cox, R. (1946). Probability, frequency, and reasonable expectation. American Journal of Physics, 14, 113.

  8. Dienes, Z. (2008). Understanding psychology as a science: An introduction to scientific and statistical inference. Palgrave Macmillan.

  9. Donnelly, P. (2005). Appealing statistics. Significance, 2(1), 46–48.

  10. Doya, K., Ishii, S., Pouget, A. and Rao, R. (2007). The Bayesian brain. MIT, MA.

  11. Efron, B. (1979). Bootstrap methods: Another look at the jackknife. Ann. Statist., 7(1), 1–26.

  12. Frank, M. C. and Goodman, N. D. (2012). Predicting pragmatic reasoning in language games. Science, 336(6084), 998.

  13. Geisler, W. and Diehl, R. (2002). Bayesian natural selection and the evolution of perceptual systems. Philosophical Transactions of the Royal Society London (B) Biology, 357, 419–448.

  14. Gelman, A., Carlin, J., Stern, H. and Rubin, D. (2003). Bayesian data analysis, second edition. Chapman and Hall, 2nd edition.

  15. Geman, S. and Geman, D. (1993). Stochastic relaxation, Gibbs distributions and the Bayesian restoration of images. Journal of Applied Statistics, 20, 25–62.

  16. Good, I. (1979). Studies in the history of probability and statistics. XXXVII A. M. Turing's statistical work in World War II. Biometrika, 66(2), 393–396.

  17. Hobson, M., Jaffe, A., Liddle, A. and Mukherjee, P. (2009). Bayesian methods in cosmology. Cambridge University Press.

  18. Jaynes, E. and Bretthorst, G. (2003). Probability theory: the logic of science. Cambridge University Press, Cambridge.

  19. Jeffreys, H. (1939). Theory of probability. Oxford University Press.

  20. Jones, M. and Love, B. (2011). Bayesian fundamentalism or enlightenment? On the explanatory status and theoretical contributions of Bayesian models of cognition. Behavioral and Brain Sciences, 34, 192–193.

  21. Kadane, J. (2009). Bayesian thought in early modern detective stories: Monsieur Lecoq, C. Auguste Dupin and Sherlock Holmes. Statistical Science, 24(2), 238–243.

  22. Kersten, D., Mamassian, P. and Yuille, A. (2004). Object perception as Bayesian inference. Ann Rev Psychology, 55(1), 271–304.

  23. Knill, D. and Richards, R. (1996). Perception as Bayesian inference. Cambridge University Press, New York, NY, USA.

  24. Kolmogorov, A. (1933). Foundations of the theory of probability. Chelsea Publishing Company (English translation, 1956).

  25. Land, M. and Nilsson, D. (2002). Animal eyes. OUP.

  26. Lawson, A. (2008). Bayesian disease mapping: Hierarchical Modeling in spatial epidemiology. Chapman and Hall.

  27. Lee, P. (2004). Bayesian statistics: An introduction. Wiley.

  28. MacKay, D. (2003). Information theory, inference, and learning algorithms. Cambridge University Press.

  29. McGrayne, S. (2011). The theory that would not die. YUP.

  30. Migon, H. and Gamerman, D. (1999). Statistical inference: An integrated approach. Arnold.

  31. Oaksford, M. and Chater, N. (2007). Bayesian rationality: The probabilistic approach to human reasoning. Oxford University Press.

  32. Parent, E. and Rivot, E. (2012). Bayesian modeling of ecological data. Chapman and Hall.

  33. Penny, W. D., Trujillo-Barreto, N. J. and Friston, K. J. (2005). Bayesian fMRI time series analysis with spatial priors. NeuroImage, 24(2), 350–362.

  34. Pierce, J. (1961; reprinted by Dover, 1980). An introduction to information theory: Symbols, signals and noise. Dover, 2nd edition.

  35. Reza, F. (1961). Information theory. New York, McGraw-Hill.

  36. Spiegelhalter, D. and Rice, K. (2009). Bayesian statistics. Scholarpedia, 4(3), 5230.

  37. Simpson, E. (2010). Edward Simpson: Bayes at Bletchley park. Significance, 7(2).

  38. Sivia, D. and Skilling, J. (2006). Data analysis: A Bayesian tutorial. OUP.

  39. Stigler, S. (1983). Who discovered Bayes's theorem? The American Statistician, 37(4), 290–296.

  40. Stone, J. (2011). Footprints sticking out of the sand (part II): Children's Bayesian priors for lighting direction and convexity. Perception, 40(2), 175–190.

  41. Stone, J. (2012). Vision and brain: How we perceive the world. MIT Press.

  42. Stone, J., Kerrigan, I. and Porrill, J. (2009). Where is the light? Bayesian perceptual priors for lighting direction. Proceedings Royal Society London (B), 276, 1797–1804.

  43. Taroni, F., Aitken, C., Garbolino, P. and Biedermann, A. (2006). Bayesian networks and probabilistic inference in forensic science. Wiley.

  44. Tenenbaum, J. B., Kemp, C., Griffiths, T. L. and Goodman, N. D. (2011). How to grow a mind: Statistics, structure, and abstraction. Science, 331(6022), 1279–1285.