Chapter 1. Experimental Design

Introduction

Experiments provide one way to make sense of the world, and many scientific investigations begin with observations. Charles Darwin began with all kinds of observations—the relationship between living organisms and fossils, the distribution of organisms on the Earth, species found on islands and nowhere else—and inferred an evolutionary process to explain what he saw. Other investigations begin with large-scale data collection. For example, genome studies begin by collecting vast amounts of data—the sequence of nucleotides in all of the DNA of an organism—and then ask questions about the patterns that are found.

Such observations can lead to questions: Why are organisms adapted to their environment? Why are there so many endemic species (organisms found in one place and nowhere else) on islands? Why does the human genome contain vast stretches of DNA that do not code for protein?

Types of Hypotheses

A hypothesis, as we saw in Chapter 1, is a tentative answer to a question, an expectation of what the results might be. This might at first seem counterintuitive. Science, after all, is supposed to be unbiased, so why should you expect any particular result at all? The answer is that a hypothesis helps to organize the design of the experiment and the interpretation of the data.

Let’s consider a simple example. We design a new medicine and hypothesize that it can be used to treat headaches. This hypothesis is not just a hunch—it is based on previous observations or experiments. For example, we might observe that the chemical structure of the medicine is similar to other drugs that we already know are used to treat headaches. If we went into the experiment with no expectation at all, it would be unclear what to measure.

A hypothesis is considered tentative because we don’t know what the answer is. The answer has to wait until we conduct the experiment and look at the data. When a hypothesis predicts a specific effect, as in the case of the new medicine, it is typical to also state a null hypothesis, which predicts no effect. Hypotheses are never proven, but it is possible, based on statistical analysis, to reject a hypothesis. When a null hypothesis is rejected, the original hypothesis gains support.
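To make the idea of rejecting a null hypothesis concrete, here is a minimal sketch in Python. The headache durations and the 0.05 significance cutoff are hypothetical, and scipy's two-sample t-test stands in for whatever statistical analysis a real study would use.

    # Minimal sketch: evaluating a null hypothesis with hypothetical data.
    # Null hypothesis: the new medicine has no effect on headache duration.
    from scipy import stats

    # Hypothetical headache durations (hours) after treatment.
    medicine_group = [2.1, 1.8, 2.5, 1.9, 2.2, 1.7, 2.0, 2.4]
    no_medicine_group = [3.9, 4.2, 3.5, 4.0, 4.4, 3.8, 4.1, 3.7]

    # Two-sample t-test: how surprising would a difference this large be
    # if the null hypothesis (no effect) were true?
    t_stat, p_value = stats.ttest_ind(medicine_group, no_medicine_group)

    if p_value < 0.05:   # conventional significance threshold
        print(f"p = {p_value:.4f}: reject the null hypothesis")
    else:
        print(f"p = {p_value:.4f}: cannot reject the null hypothesis")

A small p-value means the observed difference would rarely arise by chance alone, so the null hypothesis is rejected and the original hypothesis gains support.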

Sometimes, we formulate several alternative hypotheses to answer a single question. This may be the case when researchers consider different explanations of their data. Let’s say for example that we discover a protein that represses the expression of a gene. Our question might be: How does the protein repress the expression of the gene? In this case, we might come up with several models—the protein might block transcription, it might block translation, or it might interfere with the function of the protein product of the gene. Each of these models is an alternative hypothesis, one or more of which might be correct.

Testing Hypotheses: Controls

Hypotheses can be tested in various ways. One way is through additional observations. There are a large number of endemic species on the Galápagos Islands. We might ask why and hypothesize that it has something to do with the location of the islands relative to the mainland. To test our hypothesis, we might make additional observations. We could count the number of endemic species on many different islands, calculate the size of each of these islands, and measure the distance from the nearest mainland. From these observations, we can begin to identify the conditions, such as island size and isolation, that favor the presence of endemic species on islands.
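One simple way to analyze such observations is to look for relationships among the measured variables. The sketch below uses invented numbers for a handful of islands, not real Galápagos data, and numpy's correlation function as a stand-in for a fuller analysis.

    # Sketch: relating endemic species counts to island area and isolation.
    # All numbers are invented for illustration.
    import numpy as np

    endemic_species = np.array([12, 30, 55, 8, 70])      # count per island
    island_area_km2 = np.array([15, 120, 640, 9, 980])   # island size
    distance_km = np.array([950, 960, 1000, 940, 1005])  # distance to mainland

    # Correlation coefficients suggest which variables track species counts.
    print("species vs. area:    ", np.corrcoef(endemic_species, island_area_km2)[0, 1])
    print("species vs. distance:", np.corrcoef(endemic_species, distance_km)[0, 1])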

Hypotheses can also be tested through controlled experiments. In a controlled experiment, several different groups are tested simultaneously, keeping as many variables as possible the same among them. In one group, a single variable is changed, allowing the researcher to see whether that variable has an effect on the results of the experiment. This is called the test group. In another group, the variable is not changed and no effect is expected. This group is called the negative control. Finally, in a third group, a variable is introduced that has a known effect, to be sure that the experiment is working properly. This group is called the positive control.

Controls such as negative and positive control groups are operations or observations that are set up in such a way that the researcher knows in advance what result should be expected if everything in the study is working properly. Controls are performed at the same time and under the same conditions as an experiment to verify the reliability of the components of the experiment, the methods, and the analysis.

For example, going back to our example of a new medicine that might be effective against headaches, you could design an experiment in which there are three groups of patients—one group receives the medicine (the test group), one group receives no medicine (the negative control group), and one group receives a medicine that is already known to be effective against headaches (the positive control group). All of the other variables, such as age, gender, and socioeconomic background, would be similar among the three groups.

These three groups help the researchers to make sense of the data. Imagine for a moment that there was just the test group with no control groups, and the headaches went away after treatment. You might conclude that the medicine alleviates headaches. But perhaps the headaches just went away on their own. The negative control group helps you to see what would happen without the medicine so you can determine which effects in the test group are due solely to the medicine.

In some cases, researchers control not just for the medicine (one group receives medicine and one does not), but also for the act of giving a medicine. In this case, one negative control involves giving no medicine, and another involves giving a placebo, which is a sugar pill with no physiological effect. In this way, the researchers control for the potential variable of taking medication. In general, for a controlled experiment, it is important to be sure that there is only one difference between the test and control groups.
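A controlled design like this can be sketched as a random assignment of patients to groups, so that the treatment is the only variable that differs among them. The group labels and patient identifiers below are hypothetical.

    # Sketch: randomly assigning patients to the groups of a controlled experiment.
    # Randomization helps keep other variables (age, gender, background) balanced.
    import random

    groups = ["new medicine",     # test group
              "no medicine",      # negative control
              "placebo",          # negative control for the act of taking a pill
              "known medicine"]   # positive control

    patients = [f"patient_{i:02d}" for i in range(1, 21)]   # hypothetical patients
    random.shuffle(patients)

    # Deal patients into the four groups round-robin so group sizes stay equal.
    assignment = {g: patients[i::len(groups)] for i, g in enumerate(groups)}

    for group, members in assignment.items():
        print(group, "->", members)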

Testing Hypotheses: Variables

When performing experiments, researchers manipulate the test group differently from the control groups. This difference is known as a variable. There are two types of variables. An independent variable is the manipulation performed on the test group by the researchers. It is considered “independent” because the researchers could choose any variable they wish. The dependent variable is the effect that is being measured. It is considered “dependent” because the expectation is that it depends on the variable that was changed. In our example of the headache medicine, the independent variable is the type of medicine (new medicine, no medicine, placebo, or medicine known to be effective). The dependent variable is the presence or absence of headache following treatment.
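Viewed as a data table, the independent variable is the column the researchers set, and the dependent variable is the column they measure. The records below are invented; the sketch simply tallies the dependent variable (headache resolved or not) for each value of the independent variable (treatment given).

    # Sketch: independent variable = treatment given, dependent variable = outcome.
    # The records are invented for illustration.
    from collections import defaultdict

    records = [
        {"treatment": "new medicine",   "headache_resolved": True},
        {"treatment": "new medicine",   "headache_resolved": True},
        {"treatment": "no medicine",    "headache_resolved": False},
        {"treatment": "placebo",        "headache_resolved": False},
        {"treatment": "known medicine", "headache_resolved": True},
        {"treatment": "known medicine", "headache_resolved": True},
    ]

    # Tally the dependent variable for each value of the independent variable.
    tally = defaultdict(lambda: [0, 0])     # treatment -> [resolved, total]
    for r in records:
        tally[r["treatment"]][1] += 1
        if r["headache_resolved"]:
            tally[r["treatment"]][0] += 1

    for treatment, (yes, total) in tally.items():
        print(f"{treatment}: {yes}/{total} headaches resolved")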

In designing experiments, there is an additional issue to consider: the size of each of our groups. In order to draw conclusions from our data, we need to make sure that our results are valid and reproducible, and not merely the result of chance. One way to minimize the effect of chance is to include a large number of patients in each group. How many? The sample size is the number of independent data points and is determined based on probability and statistics, the subject of the next primer.
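The effect of sample size on chance can be explored by simulation. The sketch below assumes hypothetical recovery rates (70% with the medicine, 40% without) and repeatedly simulates experiments at several group sizes, counting how often a simple statistical test detects the difference. It illustrates the idea of statistical power; it is not a substitute for a formal sample-size calculation.

    # Sketch: how sample size affects the chance of detecting a real effect.
    # The recovery rates (0.7 vs. 0.4) are hypothetical.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)
    p_medicine, p_control = 0.7, 0.4
    n_simulations = 2000

    for n_per_group in (10, 30, 100):
        detected = 0
        for _ in range(n_simulations):
            med = rng.binomial(1, p_medicine, n_per_group)  # 1 = headache resolved
            ctl = rng.binomial(1, p_control, n_per_group)
            # Count the experiment as "detected" if the difference between
            # groups is significant at the 0.05 level.
            _, p_value = stats.ttest_ind(med, ctl)
            if p_value < 0.05:
                detected += 1
        print(f"n = {n_per_group:3d} per group: effect detected in "
              f"{detected / n_simulations:.0%} of simulated experiments")

With only a few patients per group, a real difference is often missed; larger groups make the result far less likely to be a product of chance.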
