[music playing]
Psychological research at its fundamental level aims to reveal how human beings think and feel and act.
Throughout history some have thought the best way to understand the human condition and to treat disease and dysfunction is to develop theories based on one's own intuition— first-person accounts that just feel right.
Others have arrived at a different method— direct observation and experimentation. These are the empiricists, and their method is the scientific one. The scientific method is at the heart of all research, including research in psychology.
And we do this by observing people's natural behavior, by correlating various components of their behavior, and by doing experiments that reveal cause and effect influences on their behavior and their thinking and their emotions.
The idea behind an experiment is really quite simple. If exercise improves memory, then if we make people exercise, we should see an improvement in their memory. If we wiggle the cause, we should see wiggling in the effect.
If we form an idea about human behavior— maybe what makes people happy, for example— we will organize that into a theory which seeks to explain happiness and predicts when it should happen.
So, for example, if we theorize that social support and social connection is important for human happiness, that explanation may then lead us to make predictions that maybe married people would be happier than never-married people. Or people with lots of close friends would be happier than people who are basically alone. Or maybe people who have lots of internet friends would be happier than those who have none.
And then we can check out those ideas and then refine our theory. Maybe we find that face-to-face relationships are important for happiness, but other kinds of connections don't matter so much.
If we have a hypothesis that we want to test, we would then observe behavior, either in the laboratory or in real life. We will record the data. We will aggregate them or collect them and analyze them. And then we will report them in a way that allows other psychological scientists to repeat our observations and either confirm or refute them.
We have various tests in psychological science. And we care about their reliability, which is consistency of measurement if we retest people. And we care about their validity, which is knowing that the test actually measures what it purports to measure.
So for example, about an intelligence test we might ask, is it reliable? Does it give consistent results over time? And is it valid? Does it in fact measure or assess cognitive ability and predict the sorts of behavior that it should predict if it is doing so?
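As a rough illustration, test-retest reliability can be estimated as the correlation between two administrations of the same test. The scores below are hypothetical, invented purely for this sketch:

```python
def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length score lists."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Hypothetical IQ-style scores for five people, tested on two occasions.
first_test  = [100, 112, 95, 120, 104]
second_test = [102, 110, 97, 118, 101]

# Test-retest reliability: how consistent are scores across retesting?
reliability = pearson_r(first_test, second_test)
```

A coefficient near 1.0 indicates consistent measurement. Validity is a separate question: it would require showing that the scores actually track cognitive ability and predict the outcomes they should.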
Having developed a theory that has led to a hypothesis, how do we go about testing that hypothesis? Some of the most common research methods include surveys, naturalistic observation, and case studies.
Surveys are a technique that allows us to generalize to populations. And the way to do that is to representatively— that is, randomly— sample a group of people. So, for example, a random sample of 1,500 people can represent the nation quite well. And we have lots of surveys— election surveys being one example— that do exactly that.
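The random-sampling idea can be sketched in a few lines of Python. The population here is just a hypothetical set of numeric IDs standing in for a nation:

```python
import random

# A stand-in population: every member must have an equal chance of
# being selected, and that equal chance is what makes the sample random.
population = range(250_000_000)   # hypothetical national population

# Draw a simple random sample of 1,500 people, without replacement.
survey_sample = random.sample(population, 1500)
```

Because each person is equally likely to be drawn, a sample of this size tends to mirror the population's overall composition, which is what licenses generalizing from the sample to the whole.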
The simplest method that we have is plain old observation. You can just watch. There have been social psychologists, for example, who have planted themselves in restaurants or even in homes or malls, taking notes on what people are doing.
You can only learn so much that way, but you can see patterns. Patterns of behavior can emerge that way, which is fine. You could see the way people interact with each other. And you could even develop coding systems for characterizing how people interact. There's a lot you could do. But there are so many aspects of human behavior you would never learn about that way.
Case studies are in-depth observations of single individuals. Sigmund Freud built his ideas on a very few case studies.
In case studies, the sample size is generally low, often just a single individual whose experience or behavior is analyzed in detail. While it does have the shortcoming of dealing with only a limited number of subjects, the case study is a powerful tool for exploring individual experience.
While there is a lot we can learn from these methods, they do not determine causation. For example, a social psychologist might see patterns of behavior through naturalistic observation, but would not be able to say why those patterns occurred. To tease apart the relationship between the variables, we turn to the experiment.
What goes into an experiment? In this lab, researchers are testing the effects of Ginkgo biloba on memory. Before getting into the experiment itself, there are important ethical concerns to keep in mind when conducting research, especially when human subjects are involved.
One of the big differences between studying rocks and people is that people need to be treated ethically. By that I mean that people have rights and needs that are more important than the needs of the scientist. So psychologists have a very strict code of behavior that governs what they may and may not do when they deal with other human beings as subjects.
For example, everyone who participates in an experiment must provide informed consent. They must understand what's going to happen in the study and give their consent to participate.
Once we have our hypothesis— in this case, the hypothesis is that Ginkgo biloba enhances memory— we need to set up the experiment so that we control all of the variables that we don't want to interfere with our study. For example, age or education level. Holding those extraneous variables constant, a technique called control, allows us to isolate our one manipulated variable as the cause of any differences we see.
And so what researchers will want to do is separate out those confounded variables— those intertwined variables— to see what each one is doing when the others are held constant.
Controlling the study sample means that the participants in each group are comparable on average. Depending on the study, the sample may be confined to certain groups, such as only college-age males or females. Or it may be more general, in which case a wide range of individuals of different ages and backgrounds would be selected.
Next, the researchers would assign participants by chance to either the experimental group or the control group. This is called random assignment.
So if you're going to do a drug experiment, you would want to have at least two different groups— your experimental group that receives the drug and your control or comparison condition that does not. And what random assignment does is effectively equalize everything else. Those two groups will tend to be the same age, the same attitude, the same everything. Because that's what random assignment does.
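Random assignment can be sketched directly: shuffle the participants, then split the list. The participants and their ages below are generated at random, purely to show that chance tends to balance the groups:

```python
import random

random.seed(42)  # fixed seed so this sketch is reproducible

# Hypothetical participants, each with a randomly generated age.
participants = [{"id": i, "age": random.randint(18, 80)} for i in range(200)]

# Random assignment: shuffle, then split. Chance, not choice, decides
# who receives the drug and who receives the placebo.
random.shuffle(participants)
experimental = participants[:100]   # will receive Ginkgo biloba
control = participants[100:]        # will receive the placebo

def mean_age(group):
    return sum(p["age"] for p in group) / len(group)
```

With 100 people per group, the mean ages (and any other pre-existing differences) tend to come out nearly equal, which is exactly what later lets us attribute a difference in outcomes to the manipulation rather than to the people.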
And thus if they differ at the end of the day, it must have something to do with what was manipulated in that experiment. But of course, it's important that they not know which of those two groups that they are in, lest their expectations influence their behavior.
And likewise, the person administering the drug must not know, and thus cannot convey, which is the actual drug and which is the inert pill, called the placebo. That's called the double-blind procedure, because both the experimenter and the participant are blind as to who is getting the real drug.
Subjects take a pill. The Ginkgo biloba in the experimental group. The placebo pill in the control group. And then the testing begins.
Here, the dependent variable is the subject's performance on simple memory tasks. And the independent variable— the variable that is manipulated in this study— is whether people receive Ginkgo biloba or not.
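Once the memory scores are in, the analysis starts with a simple group comparison. The scores below are hypothetical, not results from any actual ginkgo study:

```python
# Dependent variable: number of items recalled on a simple memory task.
ginkgo_scores  = [14, 12, 15, 11, 13, 12]   # experimental group (received ginkgo)
placebo_scores = [12, 11, 13, 10, 12, 11]   # control group (received placebo)

def mean(scores):
    return sum(scores) / len(scores)

# If random assignment equalized the groups beforehand, a dependable
# difference here points back to the independent variable.
difference = mean(ginkgo_scores) - mean(placebo_scores)
```

In practice, researchers would also run a significance test to ask whether a difference of this size could plausibly have arisen by chance alone.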
If the experiment is designed properly and we are looking at the right variables, we should arrive at a conclusion. Or at least a conclusion that more work needs to be done to figure out causation.
When scientists complete an experiment, they publish their results. They tell other scientists about it. They analyze their data. They write up reports. And they publish them in scientific journals that other scientists read.
So if you've done an experiment and you've got a result, great. But what we really want to know— before I report it, for example, in my textbooks— is whether it's reliable. Whether it's repeatable. Whether other people can take that procedure and replicate it in their laboratory and get the same result.
As results are replicated again and again, the accumulating evidence builds support for the original theory. Only after experiments have been shown to be repeatable are their lessons accepted into the canon of psychology.