Induction

Deduction involves logical thinking that applies to absolutely any assertion or claim — because every possible statement, true or false, has deductive logical consequences. Induction is relevant to one kind of assertion only; namely, to empirical or factual claims. Other kinds of assertions (such as definitions, mathematical equations, and moral or legal norms) simply are not the product of inductive reasoning and cannot serve as a basis for further inductive thinking.


And so, in studying the methods of induction, we are exploring tactics and strategies useful in gathering and then using evidence — empirical, observational, experimental — as the ground for a belief. Modern scientific knowledge is the product of these methods, and they differ somewhat from one science to another because they depend on the theories and technology appropriate to each of the sciences. Here all we can do is discuss the more abstract features common to inductive inquiry generally. For fuller details, you must eventually consult a physicist, chemist, geologist, or their colleagues and counterparts in other scientific fields.

OBSERVATION AND INFERENCE

Let’s begin with a simple example. Suppose we have evidence (actually we don’t, but that won’t matter for our purposes) in support of the claim that

1. In a sample of 500 smokers, 230 persons observed have cardiovascular disease.

The basis for asserting 1 — the evidence or ground — would be, presumably, straightforward physical examination of the 500 persons in the sample, one by one.

With this claim in hand, we can think of the purpose and methods of induction as pointing in two opposite directions: toward establishing the basis or ground of the very empirical proposition with which we start (in this example, the observation stated in 1) or toward understanding what that observation indicates or suggests as a more general, inclusive, or fundamental fact of nature.

In each case, we start from something we do know (or take for granted and treat as a sound starting point) — some fact of nature, perhaps a striking or commonplace event that we have observed and recorded — and then go on to something we do not fully know and perhaps cannot directly observe. In example 1, only the second of these two orientations is of any interest, so let’s concentrate exclusively on it. Let’s also generously treat as a method of induction any regular pattern or style of nondeductive reasoning that we could use to support a claim such as that in 1.

Anyone truly interested in the observed fact that 230 of 500 smokers have cardiovascular disease is likely to start speculating about, and thus be interested in finding out, whether any or all of several other propositions are also true. For example, one might wonder whether

2. All smokers have cardiovascular disease or will develop it during their lifetimes.

This claim is a straightforward generalization of the original observation as reported in claim 1. When we think inductively about the linkage between 1 and 2, we are reasoning from an observed sample (some smokers — i.e., 230 of the 500 observed) to the entire membership of a more inclusive class (all smokers, whether observed or not). The fundamental question raised by reasoning from the narrower claim 1 to the broader claim 2 is whether we have any ground for believing that what is true of some members of a class is true of them all. So the difference between 1 and 2 is that of quantity or scope.


We can also think inductively about the relation between the factors mentioned in 1. Having observed data as reported in 1, we may be tempted to assert a different and more profound kind of claim:

3. Smoking causes cardiovascular disease.

Here our interest is not merely in generalizing from a sample to a whole class; it is the far more important one of explaining the observation with which we began in claim 1. Certainly, the preferred, even if not the only, mode of explanation for a natural phenomenon is a causal explanation. In proposition 3, we propose to explain the presence of one phenomenon (cardiovascular disease) by the prior occurrence of an independent phenomenon (smoking). The observation reported in 1 is now serving as evidence or support for this new conjecture stated in 3.

Our original claim in 1 asserted no causal relation between anything and anything else; whatever the cause of cardiovascular disease may be, that cause is not observed, mentioned, or assumed in assertion 1. Similarly, the observation asserted in claim 1 is consistent with many explanations. For example, the explanation of 1 might not be 3, but some other, undetected causal factor unrelated to smoking — for instance, a genetic or dietary factor common among the persons sampled. The question one now faces is what can be added to 1, or teased out of it, to produce an adequate ground for claiming 3. (We shall return to this example for closer scrutiny.)

But there is a third way to go beyond 1. Instead of a straightforward generalization, as we had in 2, or a pronouncement on the cause of a phenomenon, as in 3, we might have a more complex and cautious further claim in mind, such as this:

4. Smoking is a factor in the causation of cardiovascular disease in some persons.

This proposition, like 3, advances a claim about causation. But 4 is obviously a weaker claim than 3. That is, other observations, theories, or evidence that would require us to reject 3 might be consistent with 4; evidence that would support 4 could easily fail to be enough to support 3. Consequently, it is even possible that 4 is true although 3 is false, because 4 allows for other (unmentioned) factors in the causation of cardiovascular disease (e.g., genetic or dietary factors) that may not be found in all smokers.


Propositions 2, 3, and 4 differ from proposition 1 in an important respect. We began by assuming that 1 states an empirical fact based on direct observation, whereas these others do not. Instead, they state empirical hypotheses or conjectures — tentative generalizations not fully confirmed — each of which goes beyond the observed facts asserted in 1. Each of 2, 3, and 4 can be regarded as an inductive inference from 1. We can also say that 2, 3, and 4 are hypotheses relative to 1, even if they are not relative to some other starting point (such as all the information that scientists today really have about smoking and cardiovascular disease).

PROBABILITY


Another way of formulating the last point is to say that whereas proposition 1, a statement of observed fact (230 out of 500 smokers have cardiovascular disease), has a probability of 1.0 — that is, it is absolutely certain — the probability of each of the hypotheses stated in 2, 3, and 4, relative to 1, is smaller than 1.0. (We need not worry here about how much smaller than 1.0 the probabilities are, nor about how to calculate these probabilities precisely.) Relative to some starting point other than 1, however, the probability of the same three hypotheses might be quite different. Of course, it still wouldn’t be 1.0, absolute certainty. But it takes only a moment’s reflection to realize that no matter what the probability of 2 or 3 or 4 may be relative to 1, those probabilities in each case will be quite different relative to different information, such as this:

5. Ten persons observed in a sample of 500 smokers have cardiovascular disease.

The idea that a given proposition can have different probabilities relative to different bases is fundamental to all inductive reasoning. The following example provides a convincing illustration. Suppose we want to consider the probability of this proposition being true:

6. Susanne Smith will live to be eighty.

Taken as an abstract question of fact, we cannot even guess what the probability is with any assurance. But we can do better than guess; we can in fact even calculate the answer, if we get some further information. Thus, suppose we are told that

7. Susanne Smith is seventy-nine.

Our original question then becomes one of determining the probability that 6 is true given 7; that is, relative to the evidence contained in proposition 7. No doubt, if Susanne Smith really is seventy-nine, then the probability that she will live to be eighty is greater than if we know only that

8. Susanne Smith is more than nine years old.

Obviously, a lot can happen to Susanne in the seventy years between nine and seventy-nine that isn’t very likely to happen in the one year between seventy-nine and eighty. And so, proposition 6 is more probable relative to proposition 7 than it is relative to proposition 8.

Let’s suppose for the sake of the argument that the following is true:

9. Ninety percent of women alive at age seventy-nine live to be eighty.

Given this additional information, and the information that Susanne is seventy-nine, we now have a basis for answering our original question about proposition 6 with some precision: relative to 7 and 9, the probability of 6 is roughly 0.9. But suppose, in addition to 7 and 9, we are also told that

10. Susanne Smith is suffering from inoperable cancer.

and also that


11. The average survival time for women suffering from inoperable cancer is 0.6 years (i.e., the average life span for women after a diagnosis of inoperable cancer is about seven months).

With this new information, the probability that 6 will be true drops significantly, all because we can now estimate the probability in relation to a new body of evidence.
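Put in conditional-probability notation (a rough sketch using only the chapter's hypothetical figures; the second value cannot be computed exactly from the information given), the same proposition receives different probabilities relative to different evidence:

```latex
P(6 \mid 7, 9) \approx 0.9
\qquad \text{whereas} \qquad
P(6 \mid 7, 9, 10, 11) \ll 0.9
```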

The probability of an event, thus, is not a fixed number but one that varies because it is always relative to some evidence — and given different evidence, one and the same event can have different probabilities. In other words, the probability of any event is always relative to how much is known (assumed, believed), and because different persons may know different things about a given event, or the same person may know different things at different times, one and the same event can have two or more probabilities. This conclusion is not a paradox but a logical consequence of the concept of what it is for an event to have (i.e., to be assigned) a probability.

If we shift to the calculation of probabilities, we find that there are, in general, two ways to calculate them. One way to proceed is by the method of a priori or equal probabilities — that is, by reference to the relevant possibilities taken abstractly and apart from any other information. Thus, in an election contest with only two candidates, Smith and Jones, each of the candidates has a fifty-fifty chance of winning (whereas in a three-candidate race, each candidate would have one chance in three of winning). Therefore, the probability that Smith will win is 0.5, and the probability that Jones will win is also 0.5. (The sum of the probabilities of all the possible, mutually exclusive outcomes must always equal 1.0, which is obvious enough if you think about it.)
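For readers who like to see the arithmetic spelled out, here is a minimal sketch of the a priori method in code (a toy illustration; the function name and the election examples are invented for this sketch, not part of the chapter's argument):

```python
# A priori (equal) probabilities: lacking any other information, each of
# n mutually exclusive possible outcomes is assigned the same probability, 1/n.

def a_priori_probabilities(outcomes):
    """Assign an equal probability to each possible outcome."""
    n = len(outcomes)
    return {outcome: 1 / n for outcome in outcomes}

two_way = a_priori_probabilities(["Smith wins", "Jones wins"])
three_way = a_priori_probabilities(["A wins", "B wins", "C wins"])

print(two_way)                  # {'Smith wins': 0.5, 'Jones wins': 0.5}
print(three_way)                # each candidate gets one chance in three
print(sum(two_way.values()))    # 1.0 -- the outcomes exhaust the possibilities
```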

But in politics the probabilities are not reasonably calculated so abstractly. We know that many empirical factors affect the outcome of an election and that a calculation of probabilities in ignorance of those factors is likely to be drastically misleading. In our example of the two-candidate election, suppose Smith has strong party support and is the incumbent, whereas Jones represents a party long out of power and is further handicapped by being relatively unknown. No one who knows anything about electoral politics would give Jones the same chance of winning as Smith. The two events are not equiprobable in relation to all the information available.

Moreover, a given event can have more than one probability. This happens whenever we calculate a probability by relying on different bodies of data that report how often the event in question has been observed to happen. Probabilities calculated in this way are relative frequencies. Our earlier hypothetical example of Susanne Smith provides an illustration. If she is a smoker and we have observed that 100 out of a random set of 500 smokers have cardiovascular disease, we have a basis for claiming that she has a probability of 100 in 500, or 0.2 (one-fifth), of having this disease. However, if other data have shown that 250 out of 500 women smokers aged seventy-nine or older have cardiovascular disease, we have a basis for believing that there is a probability of 250 in 500, or 0.5 (one-half), that she has this disease. Notice that in both calculations we assume that Susanne Smith is not among the persons we have examined. In both cases we infer the probability of her having this disease from observing its frequency in populations that exclude her.
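A relative-frequency estimate is nothing more than a ratio of observed cases to sample size. A minimal sketch using the hypothetical figures just cited shows how the same question about Susanne yields two different numbers, one for each reference class:

```python
# Relative-frequency estimate: the probability assigned to the event is the
# frequency with which it has been observed in a reference class that
# excludes Susanne herself.

def relative_frequency(observed_cases, sample_size):
    """Estimate a probability as observed cases divided by sample size."""
    return observed_cases / sample_size

# Reference class 1: smokers in general (hypothetical figures from the text).
p_given_smoker = relative_frequency(100, 500)               # 0.2

# Reference class 2: women smokers in Susanne's age group (also hypothetical).
p_given_older_woman_smoker = relative_frequency(250, 500)   # 0.5

print(p_given_smoker, p_given_older_woman_smoker)  # one event, two probabilities
```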


Both methods of calculating probabilities are legitimate; in each case the calculation is relative to observed circumstances. But as the examples show, it is most reasonable to have recourse to the method of equiprobabilities only when few or no other factors affecting possible outcomes are known.

[Cartoon: S. Harris/Cartoonstock]

MILL’S METHODS

Now let’s return to our earlier discussion of smoking and cardiovascular disease and consider in greater detail the question of a causal connection between the two phenomena. We began thus:

1. In a sample of 500 smokers, 230 persons observed have cardiovascular disease.

We regarded 1 as an observed fact, though in truth, of course, it is mere supposition. Our question now is how we might augment this information so as to strengthen our confidence that

3. Smoking causes cardiovascular disease.

or at least that

4. Smoking is a factor in the causation of cardiovascular disease in some persons.

Suppose further examination showed that

12. In the sample of 230 smokers with cardiovascular disease, no other suspected factor (such as genetic predisposition, lack of physical exercise, age over fifty) was also observed.

Such an observation would encourage us to believe that 3 or 4 is true. Why? Because we’re inclined to believe also that no matter what the cause of a phenomenon is, it must always be present when its effect is present. Thus, the inference from 1 to 3 or 4 is supported by 12, using Mill’s Method of Agreement, named after the British philosopher John Stuart Mill (1806–1873), who first formulated it. It’s called a method of agreement because the inference relies on agreement among the observed cases: wherever the effect is present, the presumed cause is present as well.
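A minimal code sketch of the Method of Agreement, using invented case records rather than real data: list the factors observed in each person who has the disease and intersect those lists; whatever is present in every case survives as a candidate cause.

```python
# Mill's Method of Agreement: a candidate cause must be present in every
# observed case in which the effect (cardiovascular disease) is present.

# Hypothetical case records: the factors observed in three diseased patients.
cases_with_effect = [
    {"smoking", "age_over_fifty"},
    {"smoking", "lack_of_exercise"},
    {"smoking", "genetic_predisposition"},
]

# Only factors common to all cases remain as candidates.
candidate_causes = set.intersection(*cases_with_effect)
print(candidate_causes)  # {'smoking'} -- the one factor on which all cases agree
```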


Let’s now suppose that in our search for evidence to support 3 or 4 we conduct additional research and discover that

13. In a sample of 500 nonsmokers, selected to be representative of both sexes, different ages, dietary habits, exercise patterns, and so on, none is observed to have cardiovascular disease.

This observation would further encourage us to believe that we had obtained significant additional confirmation of 3 or 4. Why? Because we now know that factors present (such as male sex, lack of exercise, family history of cardiovascular disease) in cases where the effect is absent (no cardiovascular disease observed) cannot be the cause. This is an example of Mill’s Method of Difference, so called because the cause or causal factor of an effect must be different from whatever factors are present when the effect is absent.
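The Method of Difference can be sketched in the same style (again with invented records): any factor found where the effect is absent is struck from the list of candidate causes.

```python
# Mill's Method of Difference: a factor present in cases where the effect is
# absent cannot be the cause of that effect.

# Hypothetical factors observed among nonsmokers with no cardiovascular disease.
cases_without_effect = [
    {"male_sex", "lack_of_exercise"},
    {"age_over_fifty", "family_history"},
]

# Candidates left over from the Method of Agreement sketch above.
candidate_causes = {"smoking"}

# Strike out every candidate that appears where the effect is absent.
present_without_effect = set.union(*cases_without_effect)
candidate_causes -= present_without_effect
print(candidate_causes)  # {'smoking'} survives; the struck factors cannot be the cause
```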

Suppose now that, increasingly confident we’ve found the cause of cardiovascular disease, we study the 230 smokers in our first sample who are ill with the disease, and we discover this:

14. Those who smoke two or more packs of cigarettes daily for ten or more years develop cardiovascular disease either at a much younger age or in a much more severe form than those who smoke less.

This is an application of Mill’s Method of Concomitant Variation, perhaps the most convincing of the three methods. Here we deal not merely with the presence of the conjectured cause (smoking) or the absence of the effect we are studying (cardiovascular disease), as we did previously, but with the subtler and more interesting matter of the degree and regularity of the correlation between the supposed cause and the effect. According to the observations reported in 14, it strongly appears that the more we have of the “cause” (smoking), the earlier or the more severe the onset of the “effect” (cardiovascular disease).
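Concomitant variation is a matter of degree, so the natural sketch is a correlation computed over paired measurements. The numbers below are invented for illustration; the point is only that a strong positive correlation between amount smoked and severity of disease is what observation 14 reports.

```python
# Mill's Method of Concomitant Variation: does the effect vary in step with
# the supposed cause? Hypothetical packs smoked per day vs. an invented
# disease-severity score for six patients.
from statistics import correlation  # Pearson's r; available in Python 3.10+

packs_per_day = [0.5, 1.0, 1.5, 2.0, 2.5, 3.0]
severity_score = [1, 2, 2, 4, 5, 6]

r = correlation(packs_per_day, severity_score)
print(round(r, 2))  # close to +1.0: the more smoking, the more severe the disease
```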

Notice, however, what happens to our confirmation of 3 and 4 if, instead of the observation reported in 14, we had discovered that

15. In a representative sample of 500 nonsmokers, cardiovascular disease was observed in 34 cases.

(We won’t pause here to explain what makes a sample more or less representative of a population, although the representativeness of samples is vital to all statistical reasoning.) Such an observation would lead us almost immediately to suspect some other or additional causal factor: Smoking might indeed be a factor in causing cardiovascular disease, but it can hardly be the cause, because (using Mill’s Method of Difference) an effect cannot be present unless its cause is also present, yet in the sample reported in 15 the effect is present in the absence of smoking.

An observation such as the one in 15, however, is likely to lead us to think our hypothesis that smoking causes cardiovascular disease has been disconfirmed. But we have a fallback position ready — we can still defend a weaker hypothesis; namely, 4: Smoking is a factor in the causation of cardiovascular disease in some persons. Even if 3 stumbles over the evidence in 15, 4 does not. It is still quite possible that smoking is a factor in causing this disease, even if it isn’t the only factor — and if it is, then 4 is true.


CONFIRMATION, MECHANISM, AND THEORY

Notice that in the discussion so far, we have spoken of the confirmation of a hypothesis, such as our causal claim in 4, but not of its verification. (Similarly, we have imagined very different evidence, such as that stated in 15, leading us to speak of the disconfirmation of 3, though not of its falsification.) Confirmation (getting some evidence for) is weaker than verification (getting sufficient evidence to regard as true), and our (imaginary) evidence so far in favor of 4 falls well short of conclusive support. Further research — the study of more representative or much larger samples, for example — might yield very different observations. It might lead us to conclude that although initial research had confirmed our hypothesis about smoking as the cause of cardiovascular disease, the additional information obtained subsequently disconfirmed the hypothesis. For most interesting hypotheses, both in detective stories and in modern science, there is both confirming and disconfirming evidence simultaneously. The challenge is to evaluate the hypothesis by considering such conflicting evidence.

As long as we confine our observations to correlations of the sort reported in our several (imaginary) observations, such as proposition 1 (230 smokers in a group of 500 have cardiovascular disease) or proposition 12 (the 230 smokers with the disease share no other suspected factor, such as lack of exercise), any defense of a causal hypothesis such as claim 3 (smoking causes cardiovascular disease) or claim 4 (smoking is a factor in causing the disease) is not likely to convince the skeptic or to lead those who hold beliefs contrary to 3 and 4 to abandon them and agree with us. Why is that? It is because a causal hypothesis without any account of the underlying mechanism by means of which the (alleged) cause produces the effect will seem superficial. Only when we can specify in detail how the (alleged) cause produces the effect will the causal hypothesis be convincing.

In other cases, in which no mechanism can be found, we seek instead to embed the causal hypothesis in a larger theory, one that rules out as incompatible any causal hypothesis except the favored one. (That is, we appeal to the test of consistency and thereby bring deductive reasoning to bear on our problem.) Thus, perhaps we cannot specify any mechanism — any underlying structure that generates a regular sequence of events, one of which is the effect we are studying — to explain why, for example, the gravitational mass of a body causes it to attract other bodies. But we can embed this claim in a larger context of physical theory that rules out as inconsistent any alternative causal explanation. To do that convincingly in regard to any given causal hypothesis, as this example suggests, requires detailed knowledge of the current state of the relevant body of scientific theory — something far beyond our need to consider in further detail here.