
Summary of Concepts

LO 1     Define learning.

Learning is a relatively enduring change in behavior or thinking that results from experiences. Organisms as simple as fish and as complex as humans have the ability to learn. Learning is about creating associations. Sometimes we associate two different stimuli (classical conditioning). At other times we make connections between our behaviors and their consequences (operant conditioning), and we can also learn by watching and imitating others (observational learning), creating a link between our behavior and the behavior of others.

LO 2     Explain what Pavlov’s studies teach us about classical conditioning.

Studies such as those on Pavlov’s dogs teach us how organisms learn to respond. In Pavlov’s work, the dogs associated a variety of stimuli with the anticipation of food, which resulted in salivating. He discovered how associations develop through the process of learning, which he referred to as conditioning. Classical conditioning is the process in which two stimuli become associated with each other; an originally neutral stimulus is conditioned to elicit an involuntary response.

LO 3     Evaluate the differences between the US, UR, CS, and CR.

In classical conditioning, a neutral stimulus is something in the environment that does not normally cause a relevant automatic or reflexive response. This stimulus is repeatedly paired with an unconditioned stimulus (US) that results in an unconditioned response (UR). The neutral stimulus thus becomes a conditioned stimulus (CS) that the organism learns to associate with the US. The CS elicits a conditioned response (CR), which is the learned response an organism has to the CS. The initial pairing of a neutral stimulus with a US is called acquisition.

LO 4     Recognize and give examples of stimulus discrimination and stimulus generalization.

Once the association is forged between a CS and a CR, the learner often responds to similar stimuli as if they are the CS. This is called stimulus generalization. For example, someone who has been bitten by a small dog and reacts with fear to all dogs, big and small, demonstrates stimulus generalization. Stimulus discrimination is the ability to differentiate between the CS and other stimuli sufficiently different from it. Someone who was bitten by a small dog may only be afraid of small dogs, but not large dogs, thus demonstrating stimulus discrimination.

LO 5     Summarize how classical conditioning is dependent on the biology of the organism.

A conditioned taste aversion is a form of classical conditioning that occurs when an organism learns to associate the taste of a particular food or drink with illness. Avoiding foods that induce sickness has adaptive value, increasing the odds the organism will survive and reproduce, passing its genes along to the next generation. Animals and people show biological preparedness, or are predisposed or inclined to learn such associations.

LO 6     Evaluate the Little Albert study and explain how fear can be learned.

The classic Little Albert experiment illustrated the conditioned emotional response, an emotional reaction (fear in Little Albert’s case) acquired via classical conditioning. The loud bang Little Albert heard was a US that elicited fear (the UR). Through conditioning, the sight of a rat became paired with the loud noise, and the rat went from being a neutral stimulus to a CS. Little Albert’s fear of the rat became a CR.

LO 7     Describe Thorndike’s law of effect.

Thorndike’s law of effect was foundational in our understanding of operant conditioning, a type of learning in which people or animals come to associate their voluntary actions with the consequences of those actions. The law of effect states that if a behavior is followed by a pleasurable outcome, this increases the likelihood the behavior will occur again. Building on Thorndike’s law of effect and Watson’s behaviorism, Skinner used reinforcers to change behaviors through small steps toward a desired behavior.

LO 8     Explain shaping and the method of successive approximations.

Shaping allows us to reinforce each incremental change in behavior, or successive approximation, toward a larger goal. Animal behavior can be shaped effectively through these successive approximations, but we may run into trouble with instinctive drift, the tendency for trained behaviors to revert toward instinctual patterns, which can interfere with parts of the process.

LO 9     Identify the differences between positive and negative reinforcement.

Positive reinforcement refers to the process of applying reinforcers that increase future occurrences of a targeted behavior. The fish treats that Thorndike gave his cats are examples of positive reinforcers (strengthening the likelihood of the cats opening the latch). Behaviors can also increase in response to negative reinforcement through the process of taking away (or subtracting) something unpleasant. Putting on a seat belt in a car to stop an annoying beep is an example of negative reinforcement (strengthening the likelihood of wearing a seat belt). Both positive and negative reinforcement increase desired behaviors.

LO 10     Distinguish between primary and secondary reinforcers.

There are two major categories of reinforcers. Primary reinforcers satisfy biological needs. Food, water, and physical contact are considered primary reinforcers. Secondary reinforcers do not satisfy biological needs, but often derive their power from their connection with primary reinforcers. Money is an example of a secondary reinforcer; we know from experience that it gives us access to primary reinforcers, such as food, a safe place to live, and perhaps even the ability to attract desirable mates.

LO 11     Describe continuous reinforcement and partial reinforcement.

Reinforcers can be delivered on a constant basis (continuous reinforcement) or intermittently (partial reinforcement). Continuous reinforcement is generally more effective for establishing a behavior, whereas partial reinforcement is more resistant to extinction (the partial reinforcement effect) and useful for maintaining behavior.

LO 12     Name the schedules of reinforcement and give examples of each.

B. F. Skinner, a leading behaviorist whose research with “Skinner boxes” illuminated the principles of operant conditioning, described several types of partial reinforcement schedules. In a fixed-ratio schedule, the participant must exhibit a predetermined number of desired responses or behaviors before a reinforcer is given. In a variable-ratio schedule, the number of desired responses or behaviors that must occur before a reinforcer is given changes across trials and is based on an average number of responses to be reinforced. In a fixed-interval schedule, the reinforcer comes after a preestablished interval of time goes by; the response or behavior is only reinforced after the given interval is over. In a variable-interval schedule, the reinforcement comes after an interval of time goes by, but the length of the interval changes from trial to trial. The lengths of these intervals are within a predetermined range based on a desired average interval length.

LO 13     Explain how punishment differs from negative reinforcement.

In contrast to reinforcement, which makes a behavior more likely to recur, the goal of punishment is to decrease a behavior. Negative reinforcement differs from punishment because it strengthens a behavior that it follows by removing something aversive or disagreeable. Punishment decreases a behavior by instilling an association between a behavior and some unwanted consequence (for example, between stealing and going to jail, or between misbehaving and a spanking).

LO 14     Summarize what Bandura’s classic Bobo doll study teaches us about learning.

Observational learning can occur when we watch a model demonstrate a behavior. Albert Bandura’s classic Bobo doll experiment showed that children readily imitate aggression when they see it modeled by adults. Studies suggest that children and adults may be inclined to mimic aggressive behaviors seen in TV shows, movies, video games, and on the Internet. Models of prosocial behavior, on the other hand, can encourage kindness, generosity, and other forms of behavior that benefit others.

LO 15     Describe latent learning and explain how cognition is involved in learning.

Learning can occur without reinforcement. Edward Tolman showed that rats could learn to navigate mazes even when given no rewards. The animals developed cognitive maps, or mental images of the mazes, yet their learning only became apparent when it was needed (latent learning). The rats were learning without reinforcement, just for the sake of learning. This cognitive approach to learning reminds us that measurable behaviors and cognitive processes are necessary and complementary elements in the study of learning.

key terms

acquisition

adaptive value

behaviorism

biological preparedness

classical conditioning

cognitive map

conditioned emotional response

conditioned response (CR)

conditioned stimulus (CS)

conditioned taste aversion

continuous reinforcement

extinction

fixed-interval schedule

fixed-ratio schedule

habituation

higher order conditioning

instinctive drift

latent learning

law of effect

learning

model

negative punishment

negative reinforcement

neutral stimulus

observational learning

operant conditioning

partial reinforcement

partial reinforcement effect

positive punishment

positive reinforcement

primary reinforcer

prosocial behaviors

punishment

reinforcement

reinforcers

secondary reinforcer

shaping

spontaneous recovery

stimulus

stimulus discrimination

stimulus generalization

successive approximations

unconditioned response (UR)

unconditioned stimulus (US)

variable-interval schedule

variable-ratio schedule

TEST PREP  are you ready?

1. One basic form of learning occurs during the process of ___________, which is evident when an organism does not respond as strongly or as often to an event following multiple exposures to it.

2. Even trout can learn through operant conditioning, as evidenced by their

3. The behaviors learned with classical conditioning are ___________, whereas those learned with operant conditioning are ___________.

4. Every time you open the pantry where dog food is stored, your dog starts to salivate. His reaction to your opening the pantry door is a(n)

5. Your first love wore a musky-scented perfume, making your heart race every time he or she appeared. Even now when you smell that scent, your heart speeds up, suggesting the scent is a(n)

6. Avoiding foods that induce sickness has ___________. This taste aversion helps organisms survive.

7. Little Albert was an 11-month-old baby who originally had no fear of rats. In an experiment conducted by Watson and Rayner, he was classically conditioned to fear white rats through the pairing of a loud noise with exposure to a rat. His resulting fear is an example of

Answer: c. conditioned emotional response.

8. ___________ indicates that if a behavior is followed by a pleasurable outcome, it likely will be repeated.

Answer: d. The law of effect

9. Which of the following is an example of negative reinforcement?

Answer: c. A dog whining in the morning, leading an owner to wake up and take it outside

10. All your friends tell you that you look fabulous in your new jeans, so you start wearing them all the time. This is an example of

Answer: a. positive reinforcement.

11. A child is reprimanded for misbehaving. Following this, the child seems to misbehave even more! This indicates that reprimanding the child was

Answer: b. positive reinforcement.

12. In Bandura’s Bobo doll study, children who saw an adult attacking and shouting at the doll:

Answer: a. were more likely to imitate the adult’s aggressive behavior.

13. Children who watch TV programs with violent role models are:

Answer: d. at increased risk of abusing their spouses when they become adults.

14. Rats allowed to explore a maze, without getting reinforcers until the 11th day of the experiment, subsequently behaved in the maze as if they had been given reinforcers throughout the entire experiment. Their behavior is evidence of

Answer: a. latent learning.

15. Wolfgang Köhler’s research on chimpanzees suggests that animals are capable of thinking through a problem before taking action, and having a sudden coming together of awareness of a situation, leading to the solution of a problem. This is called:

Answer: b. insight.

16. How are classical conditioning and operant conditioning different from each other? What characteristics do they share?

Answer: In classical conditioning the learned behaviors are involuntary, whereas in operant conditioning, the learned behaviors are voluntary. Classical conditioning links different stimuli, and operant conditioning links behaviors to their consequences. Shared characteristics: They both involve forming associations. They both have an acquisition phase, and in both cases extinction of the learned behavior can occur. (See Table 5.2 and Figure 5.4.)

17. Why is biological preparedness important in the life of animals?

Answer: Biological preparedness is a biological predisposition or inclination of animals to form associations quickly through classical conditioning. If an animal eats something and then gets sick, it is unlikely to eat that food again, which could influence the survival of the animal.

18. What is the difference between primary reinforcers and secondary reinforcers? Give an example of each and how they might be used to change a behavior.

Answer: Answers will vary, but can be based on the following definitions. Primary reinforcers are reinforcers that satisfy biological needs, such as food, water, or physical contact. Secondary reinforcers do not satisfy biological needs, but often gain their power through their association with primary reinforcers. A primary reinforcer used to change behavior might be food. A college tries to increase student participation by providing food at important school functions. Money can be used as a secondary reinforcer. Employees are paid money, which increases attendance at work.

19. How are punishment and negative reinforcement different? Give examples of negative reinforcement, positive punishment, and negative punishment and the goal of each for changing behavior.

Answer: Answers will vary, but can be based on the following definitions. Punishment decreases the likelihood of the behavior it follows. On the other hand, negative reinforcement increases the likelihood of a behavior recurring. See Table 5.3 for examples.

20. Some studies show that watching violent films is associated with violent behaviors. Why are such studies unable to show a cause-and-effect relationship between a film’s content and viewers’ behavior?

Answer: Answers can vary. The studies that have found an association between violent films and violent behaviors are correlational: They can only highlight a link between the films and the behaviors, not a cause-and-effect relationship. There are other factors that have to be considered. For example, parenting could influence both television viewing and aggression. A parent who is emotionally neglectful may place a child in front of a television all day, such that the child is more likely to imitate some aggressive behaviors seen on the TV. Simultaneously, the child may resent her parent for ignoring her, and the resentment could lead to aggressive behavior. In this example, is it the television programming or the parenting that is leading to aggression?

Get personalized practice by logging into LaunchPad at http://www.worthpublishers.com/launchpad/sciam1e to take the LearningCurve adaptive quizzes for Chapter 5.
