Behaviorism
Behaviorism emerged in the early 1900s as a reaction to depth psychology and other traditional forms of psychology, which often had difficulty making predictions that could be tested experimentally, but it derived from earlier research in the late nineteenth century, such as Edward Thorndike's law of effect, a procedure that used consequences to strengthen or weaken behavior.
Behaviorism (also spelled behaviourism) is a systematic approach to understanding the behavior of humans and other animals. It assumes that behavior is either a reflex evoked by the pairing of certain antecedent stimuli in the environment, or a consequence of that individual's history, especially reinforcement and punishment contingencies, together with the individual's current motivational state and controlling stimuli. Although behaviorists generally accept the important role of heredity in determining behavior, they focus primarily on environmental events. The cognitive revolution of the late 20th century largely replaced behaviorism as an explanatory theory with cognitive psychology, which, unlike behaviorism, examines internal mental states.
With a 1924 publication, John B. Watson devised methodological behaviorism, which rejected introspective methods and sought to understand behavior by measuring only observable behaviors and events. It was not until the 1930s that B. F. Skinner suggested that covert behavior—including cognition and emotions—is subject to the same controlling variables as observable behavior, which became the basis for his philosophy called radical behaviorism. While Watson and Ivan Pavlov investigated how (conditioned) neutral stimuli elicit reflexes in respondent conditioning, Skinner assessed the reinforcement histories of the discriminative (antecedent) stimuli under which behavior is emitted; the technique became known as operant conditioning.
The titles given to the various branches of behaviorism include:
- Behavioral genetics: Proposed in 1869 by Francis Galton, a relative of Charles Darwin. Galton believed that inherited factors had a significant impact on individuals' behaviors, though he did not consider nurture unimportant. The approach was later discredited through its association with the eugenics movement, as researchers did not want to be linked, directly or indirectly, with Nazi politics.
- Interbehaviorism: Proposed by Jacob Robert Kantor before B. F. Skinner's writings.
- Methodological behaviorism: John B. Watson's behaviorism states that only public events (the motor behaviors of an individual) can be objectively observed. Although it acknowledged that thoughts and feelings exist, they were not considered part of the science of behavior. It also laid the theoretical foundation for the behavior-modification approach of the 1970s and 1980s. It is often contrasted with B. F. Skinner's radical behaviorism: methodological behaviorism represents the logical positivist-derived philosophy of science that remains common today, whereas radical behaviorism takes a pragmatist perspective.
- Psychological behaviorism: Proposed by Arthur W. Staats; unlike the previous behaviorisms of Skinner, Hull, and Tolman, it was based upon a program of human research involving various types of human behavior. Psychological behaviorism introduces new principles of human learning: humans learn not only by animal learning principles but also by special human learning principles, which involve humans' uniquely vast learning capacity. Humans learn repertoires that enable them to learn other things, so human learning is cumulative. No other animal demonstrates that ability, making the human species unique.
- Radical behaviorism: Skinner's philosophy extends Watson's form of behaviorism by theorizing that processes within the organism—particularly private events, such as thoughts and feelings—are also part of the science of behavior, and it suggests that environmental variables control these internal events just as they control observable behaviors. Not all behavioral events are observable; some are considered "private," accessible to and noticed by only the person who is behaving. Skinner described behavior as the part of the functioning of the organism that consists of its interacting, or having commerce, with its surrounding environment; in simple terms, how an individual interacts with its environment. Although private events cannot be directly seen by others, they can be inferred from overt behavior. Radical behaviorism forms the core philosophy behind behavior analysis. Willard Van Orman Quine drew on many of radical behaviorism's ideas in his study of knowledge and language.
- Teleological behaviorism: Proposed by Howard Rachlin, post-Skinnerian, purposive, close to microeconomics. Focuses on objective observation as opposed to cognitive processes.
- Theoretical behaviorism: Proposed by J. E. R. Staddon, adds a concept of internal state to allow for the effects of context. According to theoretical behaviorism, a state is a set of equivalent histories, i.e., past histories in which members of the same stimulus class produce members of the same response class (i.e., B. F. Skinner's concept of the operant). Conditioned stimuli are thus seen to control neither stimulus nor response but state. Theoretical behaviorism is a logical extension of Skinner's class-based (generic) definition of the operant. Two subtypes of theoretical behaviorism are:
- Hullian and post-Hullian: theoretical, group data, not dynamic, physiological
- Purposive: Tolman's behavioristic anticipation of cognitive psychology
B. F. Skinner proposed radical behaviorism as the conceptual underpinning of the experimental analysis of behavior. This viewpoint differs from other approaches to behavioral research in various ways, but, most notably here, it contrasts with methodological behaviorism in accepting feelings, states of mind and introspection as behaviors also subject to scientific investigation. Like methodological behaviorism, it rejects the reflex as a model of all behavior, and it defends the science of behavior as complementary to but independent of physiology. Radical behaviorism overlaps considerably with other western philosophical positions, such as American pragmatism.
Although John B. Watson mainly emphasized methodological behaviorism throughout his career, Watson and Rosalie Rayner conducted the infamous Little Albert experiment (1920), a study in which Ivan Pavlov's theory of respondent conditioning was first applied to eliciting a fearful reflex of crying in a human infant; this became the launching point for understanding covert behavior (or private events) in radical behaviorism. However, Skinner felt that aversive stimuli should only be experimented on with animals, and he spoke out against Watson for testing something so controversial on a human.
In 1959, Skinner observed the emotions of two pigeons, noting that they appeared angry because their feathers were ruffled. The pigeons had been placed together in an operant chamber, where they behaved aggressively as a consequence of previous reinforcement in the environment. Through stimulus control and subsequent discrimination training, the pigeons came to notice that whenever Skinner turned off the green light, the food reinforcer was discontinued following each peck, and they then responded without aggression. Skinner concluded that humans also learn aggression and possess such emotions (as well as other private events) no differently than nonhuman animals do.
Operant conditioning was developed by B. F. Skinner in 1938 and is a form of learning in which the frequency of a behavior is controlled by its consequences. In other words, behavior is controlled by historical consequential contingencies, particularly reinforcement—a stimulus that increases the probability of performing a behavior—and punishment—a stimulus that decreases that probability. The core consequence operations are either positive (presenting a stimulus following a response) or negative (withdrawing a stimulus following a response).
The following descriptions explain the four common types of consequences in operant conditioning:
- Positive reinforcement: Providing a stimulus that an individual enjoys, seeks, or craves, in order to reinforce desired behaviors. For example, when a person is teaching a dog to sit, they pair the command "sit" with a treat. The treat is the positive reinforcement of the behavior of sitting. The key to making positive reinforcement effective is to reward the behavior immediately.
- Negative reinforcement: Increases the frequency of a behavior by removing an unpleasant or unwanted stimulus. For example, a child who hates being nagged (the aversive stimulus) cleans his room (the behavior) more frequently, because cleaning prevents his mother from nagging. Another example would be putting on sunscreen (the behavior) before going outside to prevent sunburn (the aversive stimulus).
- Positive punishment: Presenting a stimulus that an individual finds aversive in order to decrease undesired behaviors. For example, if a child engages in an undesired behavior, parents may spank (the stimulus) the child to correct the behavior.
- Negative punishment: Removing a stimulus that an individual desires in order to decrease undesired behaviors. An example of this would be grounding a child for failing a test. Grounding in this example is taking away the child's ability to play video games. As long as it is clear that the ability to play video games was taken away because they failed a test, this is negative punishment. The key here is the connection to the behavior and the result of the behavior.
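The four consequence types above can be sketched as a small simulation. This is an illustrative model only, not drawn from behavioral research: the single probability value, the fixed step size, and all names are invented here.

```python
# Illustrative sketch (invented model): the four operant consequence types
# expressed as shifts in the probability that a behavior recurs.
# "Positive"/"negative" describe whether a stimulus is presented or
# withdrawn, NOT the direction of the change.

def update_probability(p, consequence, step=0.1):
    """Nudge the probability of a behavior up (reinforcement) or
    down (punishment), clamped to [0, 1]."""
    effects = {
        "positive_reinforcement": +step,  # present a desired stimulus (a treat)
        "negative_reinforcement": +step,  # withdraw an aversive stimulus (nagging stops)
        "positive_punishment":    -step,  # present an aversive stimulus (spanking)
        "negative_punishment":    -step,  # withdraw a desired stimulus (grounding)
    }
    return min(1.0, max(0.0, p + effects[consequence]))

# Repeated positive reinforcement makes the behavior more likely:
p = 0.5
for _ in range(3):
    p = update_probability(p, "positive_reinforcement")
print(round(p, 2))  # 0.8
```

Note that both reinforcement types raise the probability and both punishment types lower it; the positive/negative labels only distinguish presenting from withdrawing a stimulus.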
A classic experiment in operant conditioning is the Skinner box ("puzzle box," or operant conditioning chamber), which Skinner used to test the effects of operant conditioning principles on rats, cats, and other species. From these experiments he discovered that the rats learned very effectively when they were rewarded frequently with food. Skinner also found that he could shape the rats' behavior (create new behavior) through the use of rewards, which could in turn be applied to human learning as well.
Skinner's model was based on the premise that reinforcement is used for desired actions or responses, while punishment is used to stop undesired ones. The theory suggested that humans or animals will repeat any action that leads to a positive outcome and avoid any action that leads to a negative outcome. The pigeon experiment demonstrated that a positive outcome leads to learned behavior, since the pigeon learned to peck the disc in return for the reward of food.
These historical consequential contingencies subsequently lead to (antecedent) stimulus control, but in contrast to respondent conditioning, where antecedent stimuli elicit reflexive behavior, operant behavior is merely emitted: antecedent stimuli set the occasion for it rather than force its occurrence. Stimulus control includes the following controlling stimuli:
- Discriminative stimulus (Sd): An antecedent stimulus that increases the chance of the organism engaging in a behavior. One example of this occurred in Skinner's laboratory. Whenever the green light (Sd) appeared, it signaled the pigeon to perform the behavior of pecking because it learned in the past that each time it pecked, food was presented (the positive reinforcing stimulus).
- Stimulus delta (S-delta): An antecedent stimulus that signals the organism not to perform a behavior, because that behavior was extinguished or punished in its presence in the past. One notable instance of this occurs when a person stops their car immediately after the traffic light turns red (the S-delta). The person could instead decide to drive through the red light, but would then receive a traffic ticket (the positive punishing stimulus), so this behavior becomes less likely to reoccur in the presence of the S-delta.
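The contrast between an Sd and an S-delta can be sketched as a lookup of learned response tendencies. The stimuli and probability values below are invented for illustration; they simply encode the idea that responding is probable in the presence of a stimulus historically correlated with reinforcement and improbable otherwise.

```python
# Illustrative sketch (invented values): after discrimination training,
# the probability of responding depends on which antecedent stimulus
# is present. The Sd was correlated with reinforcement in the past;
# the S-delta was correlated with extinction or punishment.

RESPONSE_PROBABILITY = {
    "green_light (Sd)":      0.95,  # pecking was reinforced with food
    "red_light (S-delta)":   0.05,  # pecking was extinguished/punished
}

def likely_to_respond(stimulus):
    """Return True if the organism is more likely than not to respond."""
    return RESPONSE_PROBABILITY[stimulus] > 0.5

print(likely_to_respond("green_light (Sd)"))     # True
print(likely_to_respond("red_light (S-delta)"))  # False
```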
Although operant conditioning plays the largest role in discussions of behavioral mechanisms, respondent conditioning (also called Pavlovian or classical conditioning) is also an important behavior-analytic process that need not refer to mental or other internal processes. Pavlov's experiments with dogs provide the most familiar example of the classical conditioning procedure. In the beginning, the dog was given meat (the unconditioned stimulus, UCS, a stimulus that naturally elicits a response) to eat, resulting in increased salivation (the unconditioned response, UCR, the response naturally elicited by the UCS).
Afterward, a bell ring was presented to the dog together with the food. Although the bell ring was initially a neutral stimulus (NS, meaning that it had no effect on salivation), after a number of pairings the dog would start to salivate upon hearing the bell alone. Eventually, the neutral stimulus (the bell ring) became a conditioned stimulus, and the salivation it elicited became a conditioned response (the same response as the unconditioned response). Although Pavlov proposed some tentative physiological processes that might be involved in classical conditioning, these have not been confirmed. The idea of classical conditioning helped behaviorist John Watson identify the key mechanism behind how humans acquire the behaviors that they do: finding a natural reflex that produces the response being considered.
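The pairing process can be sketched as a simple learning-rule simulation in which the bell's associative strength grows toward a maximum with each bell–meat pairing. The linear update rule and its parameters are modeling assumptions for illustration, not Pavlov's own formulation.

```python
# Illustrative sketch (assumed model, not Pavlov's): the associative
# strength V of the bell starts at zero (neutral stimulus) and grows
# toward an asymptote v_max with each NS-UCS pairing.

def pairings(trials, alpha=0.3, v_max=1.0):
    """Return the bell's associative strength after each pairing trial."""
    v = 0.0          # neutral stimulus: no salivation to the bell alone
    history = []
    for _ in range(trials):
        v += alpha * (v_max - v)   # each bell+meat pairing strengthens V
        history.append(round(v, 3))
    return history

print(pairings(5))  # strength rises trial by trial toward v_max
```

The diminishing increments reflect the common observation that early pairings produce the largest gains, with conditioning leveling off as the conditioned response approaches full strength.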
Watson's "Behaviourist Manifesto" has three aspects that deserve special recognition: first, that psychology should be purely objective, with any interpretation of conscious experience removed, thus leading to psychology as the "science of behaviour"; second, that the goals of psychology should be to predict and control behaviour (as opposed to describing and explaining conscious mental states); and third, that there is no notable distinction between human and non-human behaviour. Following Darwin's theory of evolution, this simply means that human behaviour is a more complex version of the behaviour displayed by other species.