2.1 Empiricism: How to Know Stuff

When ancient Greeks sprained their ankles, caught the flu, or accidentally set their togas on fire, they had to choose between two kinds of doctors: dogmatists (from dogmatikos, meaning “belief”), who thought that the best way to understand illness was to develop theories about the body’s functions, and empiricists (from empeirikos, meaning “experience”), who thought that the best way to understand illness was to observe sick people. The rivalry between these two schools of medicine did not last long because the people who went to see dogmatists tended to die, which was bad for business. Today we use the word dogmatism to describe the tendency for people to cling to their assumptions, and the word empiricism to describe the belief that accurate knowledge can be acquired through observation. The fact that we can answer questions about the natural world by examining it may seem painfully obvious to you, but this painfully obvious fact has only recently gained wide acceptance. For most of human history, people trusted authority to answer important questions, and it is only in the last millennium (and especially in the past three centuries) that people have begun to trust their eyes and ears more than their elders.

The astronomer Galileo Galilei (1564–1642) was tried by the Inquisition and sentenced to house arrest for sticking to his own observations of the solar system rather than accepting the teachings of the church. In 1610 he wrote to his friend and fellow astronomer Johannes Kepler (1571–1630), “What would you say of the learned here, who, replete with the pertinacity of the asp, have steadfastly refused to cast a glance through the telescope? What shall we make of this? Shall we laugh, or shall we cry?” As it turned out, the answer was cry.

2.1.1 The Scientific Method

What is the scientific method?

Empiricism is the essential element of the scientific method, which is a procedure for finding truth by using empirical evidence. In essence, the scientific method suggests that when we have an idea about the world—about how bats navigate, or where the moon came from, or why people cannot forget traumatic events—we should gather empirical evidence relevant to that idea and then modify the idea to fit with the evidence. Scientists usually refer to an idea of this kind as a theory, which is a hypothetical explanation of a natural phenomenon. We might theorize that bats navigate by making sounds and then listening for the echo, that the moon was formed when a small planet collided with the Earth, or that the brain responds to traumatic events by producing chemicals that facilitate memory. Each of these theories is an explanation of how something in the natural world works.


Classical thinkers like Euclid and Ptolemy believed that our eyes work by emitting rays that travel to the objects we see. Ibn al-Haytham (965 CE–1039 CE) reasoned that if this were true, then when we open our eyes it should take longer to see something far away than something nearby. And guess what? It does not. And with that single observation, a centuries-old theory vanished—in the blink of an eye.

When scientists set out to develop a theory, they generally follow the rule of parsimony, which says that the simplest theory that explains all the evidence is the best one. Parsimony comes from the Latin word parcere, meaning “to spare,” and the rule is often credited to the fourteenth-century logician William Ockham, who wrote “Plurality should not be posited without necessity.” Ockham was not suggesting that nature is simple or that complex theories are wrong. He was merely suggesting that it makes sense to start with the simplest theory and then make the theory more complicated only if one must. Part of what makes E = mc² such a lovely theory is that it expresses all the necessary information in exactly three letters and one number.

We want our theories to be as simple as possible, but we also want them to be right. How do we decide if a theory is right? Theories make specific predictions about what we should observe in the world. For example, if bats really do navigate by making sounds and then listening for echoes, then we should observe that deaf bats cannot navigate. That “should statement” is technically known as a hypothesis, which is a falsifiable prediction made by a theory. The word falsifiable is a critical part of that definition. Some theories, such as “God created the universe,” simply do not specify what we should observe if they are true, and thus no observation can ever falsify them. Because these theories do not give rise to hypotheses, they can never be the subject of scientific investigation. That does not mean they are wrong—it just means that we cannot evaluate them by using the scientific method.

Why can theories be proven wrong but not right?

So what happens when we test a hypothesis? Albert Einstein is reputed to have said, “No amount of experimentation can ever prove me right, but a single experiment can prove me wrong.” Why should that be? Well, just imagine what you could learn about the navigation-by-sound theory if you observed a few bats. If you saw the deaf bats navigating every bit as well as the hearing bats, then the navigation-by-sound theory would instantly be proved wrong; but if you saw the deaf bats navigating more poorly than the hearing bats, your observation would be consistent with the navigation-by-sound theory but would not prove it. After all, even if you did not see a deaf bat navigating perfectly today, it is still possible that someone else did, or that you will see one tomorrow.
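The asymmetry in Einstein’s remark is a matter of elementary logic, and it can be written out explicitly. Here is a minimal sketch in propositional notation; the symbols T (the theory) and O (the observation the theory predicts) are ours, introduced only for illustration:

\[
(T \rightarrow O) \land \lnot O \;\therefore\; \lnot T \qquad \text{(modus tollens: a failed prediction validly refutes the theory)}
\]
\[
(T \rightarrow O) \land O \;\not\therefore\; T \qquad \text{(affirming the consequent: a confirmed prediction does not prove the theory)}
\]

In the bat example, T is “bats navigate by sound” and O is “deaf bats cannot navigate”: a single deaf bat observed navigating perfectly establishes ¬O and therefore ¬T, whereas deaf bats observed navigating poorly merely leaves T standing.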

Scientists once believed that all combustible objects contain an element called phlogiston that is released during burning. But in 1779, the chemist Antoine Lavoisier (1734–1794) demonstrated that when metals such as mercury are burned, they do not get lighter, as the phlogiston theory said they must. Pictured here are Lavoisier (left) and his theory-killing apparatus (right).


We cannot observe every bat that has ever been and will ever be, which means that even if the theory was not disproved by your observation there always remains some chance that it will be disproved by some other observation. When evidence is consistent with a theory, it increases our confidence in it, but it never makes us completely certain. The next time you see a newspaper headline that says “Scientists prove theory X correct,” you are hereby authorized to roll your eyes.

The scientific method suggests that the best way to learn the truth about the world is to develop theories, derive hypotheses from them, test those hypotheses by gathering evidence, and then use that evidence to modify the theories. But what exactly does gathering evidence entail?

2.1.2 The Art of Looking

Frames 2 and 3 of this historic photographic sequence by Eadweard Muybridge (1830–1904) show that horses can indeed fly, albeit briefly and only in coach.

For centuries, people rode horses. And for centuries when they got off their horses they sat around and argued about whether all four of a horse’s feet ever leave the ground at the same time. Some said yes, some said no, and some said they really wished they could talk about something else for a change. In 1877, Eadweard Muybridge invented a technique for taking photographs in rapid succession, and his photographs showed that when horses gallop, all four feet do indeed leave the ground. And that was that. Never again did two riders have the pleasure of a flying-horse debate because Muybridge had settled the matter once and for all.

But why did it take so long? After all, people had been watching horses gallop for quite a few years, so why did some say that they clearly saw the horse going airborne while others said that they clearly saw at least one hoof on the ground at all times? Because as wonderful as eyes may be, there are a lot of things they cannot see and a lot of things they see incorrectly. We cannot see germs but they are very real. Earth looks perfectly flat but it is imperfectly round. As Muybridge knew, we have to do more than just look if we want to know the truth about the world. Empiricism is the right approach, but to do it properly requires an empirical method, a set of rules and techniques for observation.

In the 1920s, managers at the Hawthorne plant, a telephone-parts factory outside Chicago, commissioned a study to examine whether a change in shop-floor lighting would improve workers’ productivity. And indeed, when the lighting was made brighter, productivity increased. However, when the lighting was dimmed, productivity also increased! It did not matter what was changed: productivity always went up (for a short time). In general, people behave differently when they know they are being watched compared to when they do not; this phenomenon is now known as the Hawthorne effect.

In many sciences, the word method refers primarily to technologies that enhance the powers of the senses. Biologists use microscopes and astronomers use telescopes because the things they want to observe are invisible to the naked eye. Human behaviour, on the other hand, is quite visible, so you might expect psychology’s methods to be relatively simple. In fact, the empirical challenges facing psychologists are among the most daunting in all of modern science, and so psychology’s empirical methods are among the most sophisticated. These challenges arise because people have three qualities that make them unusually difficult to study:

  • Complexity: Human thought and behaviour are produced by the brain, arguably the most intricate object ever studied.

  • Variability: No two people ever think or act exactly alike, so observations of one person do not automatically generalize to others.

  • Reactivity: People often behave differently when they know they are being observed, as the Hawthorne effect illustrates.

What makes human beings especially difficult to study?

The fact that human beings are complex, variable, and reactive presents a major challenge to the scientific study of their behaviour, and psychologists have developed two kinds of methods that are designed to meet these challenges head-on: methods of observation, which allow them to determine what people do, and methods of explanation, which allow them to determine why people do it. We will examine both of these methods in the sections that follow.

  • Empiricism is the belief that the best way to understand the world is to observe it firsthand. It is only in the last few centuries that empiricism has come to prominence.

  • Empiricism is at the heart of the scientific method, which suggests that our theories about the world give rise to falsifiable hypotheses, and that we can thus make observations that test those hypotheses. The results of these tests can disprove our theories but cannot prove them.

  • Observation does not just mean “looking.” It requires a method. The methods of psychology are special because, more than most other natural phenomena, human beings are complex, variable, and reactive.