Once This Kind of Continuous Reinforcement is Discontinued We Can Predic
Intermittent Reinforcement
However, intermittent reinforcement of the kind used in schedules of reinforcement does have one important quality—it produces robust responding that is significantly more resistant to extinction than when continuous reinforcement is used (cf. Ferster & Skinner, 1957).
From: Comprehensive Clinical Psychology, 1998
Skill Acquisition
Jonathan Tarbox, Courtney Tarbox, in Training Manual for Behavior Technicians Working with Individuals with Autism, 2017
5.15.1 Intermittent Reinforcement
As described earlier in this section, intermittent reinforcement is when only some (i.e., not all) occurrences of a behavior are reinforced. Most of our everyday ongoing behavior is on some kind of intermittent schedule of reinforcement, so we need to teach the learners with ASD with whom we work to adjust to and rely upon intermittent reinforcement. To use intermittent reinforcement for encouraging maintenance, first check the maintenance section of the skill acquisition plan or check any existing separate maintenance plan. Generally speaking, reinforcement should be gradually thinned from continuous reinforcement during skill acquisition to reinforcing every other correct response, to reinforcing every two to three correct responses on a variable and unpredictable schedule.
URL: https://www.sciencedirect.com/science/article/pii/B9780128094082000052
Technology in Clinical Psychology
Theresa Fleming, ... Liesje Donkin, in Comprehensive Clinical Psychology (Second Edition), 2022
10.05.2.1.4 Gamification
Gamification refers to the application of features from gaming, such as leaderboards, competition and intermittent reinforcement, for non-game purposes (Johnson et al., 2016; Robson et al., 2015; Sailer and Homner, 2019). Gamification is used in applications from shopping loyalty stamp cards (Aziz et al., 2017) to step counters, which involve mechanisms common to gaming such as leaderboards, competition, measurable achievements and rewards. Happify (Happify Inc., 2020) and SuperBetter (SuperBetter Inc., 2020) are examples of publicly available mental health programs that use gamification extensively.
URL: https://www.sciencedirect.com/science/article/pii/B978012818697800011X
Foundations
Graham C.L. Davey, in Comprehensive Clinical Psychology, 1998
(iv) Schedules of reinforcement
One of the important features of operant conditioning is that not every instance of a response needs to be reinforced in order to increase the frequency of the response. Intermittent reinforcement of this kind can be programmed on what are known as schedules of reinforcement, with responses being reinforced on the basis of time (e.g., time since the last reinforcement) or number (e.g., every nth response is reinforced). Basic schedules of reinforcement generate characteristic patterns of behavior which it is not necessary to elaborate on here (see Davey, 1989b; Ferster & Skinner, 1957).
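As a minimal illustration of the ratio schedules described above (the function and variable names are my own, not from Davey or Ferster & Skinner), a schedule can be sketched as a stateful predicate over successive responses: a fixed-ratio (FR) schedule reinforces every nth response, while a variable-ratio (VR) schedule reinforces after an unpredictable number of responses that averages n. Time-based (interval) schedules would instead track elapsed time since the last reinforcer.

```python
import random

def make_schedule(kind, n):
    """Return a predicate deciding whether each successive response is reinforced.

    kind: 'FR' (every nth response) or 'VR' (on average every nth response).
    Illustrative sketch only, not a standard library for operant procedures.
    """
    count = 0
    target = n if kind == "FR" else random.randint(1, 2 * n - 1)

    def respond():
        nonlocal count, target
        count += 1
        if count >= target:
            count = 0
            if kind == "VR":
                # draw a new, unpredictable requirement averaging n
                target = random.randint(1, 2 * n - 1)
            return True   # this response is reinforced
        return False      # this response goes unreinforced

    return respond

fr3 = make_schedule("FR", 3)
print([fr3() for _ in range(6)])  # [False, False, True, False, False, True]
```

The characteristic behavior patterns each schedule generates (e.g., post-reinforcement pausing under FR, steady high rates under VR) are not modeled here; the sketch only captures which responses meet the reinforcement criterion.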
However, intermittent reinforcement of the kind used in schedules of reinforcement does have one important quality—it produces robust responding that is significantly more resistant to extinction than when continuous reinforcement is used (cf. Ferster & Skinner, 1957). This is an important consideration when behavior modifiers are attempting to develop patterns and repertoires of behavior that will survive beyond the therapeutic conditions in which they are initially reinforced.
URL: https://www.sciencedirect.com/science/article/pii/B0080427073001814
The Professional Relationship
Timothy A. Storlie, PhD, in Person-Centered Communication with Older Adults, 2015
Deepening Rapport
There are many ways to deepen rapport. Probably the most common simply involves the passage of time combined with numerous pleasant encounters. This is an example of a powerful learning principle in action: spaced repetition with intermittent reinforcement over time. In other words, the provider and older adult have a number of interactions over a period of weeks or months (spaced repetition) that, for the most part, are mutually satisfying (intermittent reinforcement). This pattern of "exposure" (contact with a situation, such as a visit to a physician) followed by reward often results in strong positive behavioral conditioning (the patient feels good about returning to the doctor) and offers one explanation for how friendships develop. The identification of the constructive effect of spaced repetition and intermittent positive reinforcement on learning is attributed to observations made by Hermann Ebbinghaus in the 1800s (Ebbinghaus, 1913).
To deepen rapport, the provider can express a genuine interest in learning more about the older adult—about what is important to him or her. The emphasis is upon the provider making a greater effort to better understand the older adult, not the older adult understanding the provider.
Another suggestion is to listen carefully to the manner of speaking used by the older adult and, if and when appropriate, to slightly reflect some of the noted characteristics. This extremely subtle process of imitating is often referred to as mirroring.
URL: https://www.sciencedirect.com/science/article/pii/B9780124201323000039
Evolving Ideas About Human Learning
Noel Entwistle, in Student Learning and Academic Understanding, 2018
Behaviorism
When I started educational research in the mid-1960s, the dominant theories of learning were still those of the behaviorists, who saw the role of the teacher as no more than the shaping of correct responses by students through intermittent reinforcement. Much earlier, Thorndike had identified two "laws of learning," which he believed would explain learning in all animals, including humans—the law of effect and the law of exercise, which Skinner (1954) elaborated into a set of four general conditions for improving learning. These can be summarized as:
1. Practice of the correct responses (law of exercise);
2. knowledge of the results and reinforcement of the right answer (law of effect);
3. minimum delay in reinforcement; and
4. successive small steps with hints to ensure answers will generally be right.
This summary is based on an influential article by McKeachie (1974) entitled "The decline and fall of the laws of learning." He argued that these "laws" could not be applied in any sensible way to teaching in school, although the underlying principles could still be used effectively in other areas of human behavior. Incorporating these principles within school teaching (through programmed learning with teaching machines, and later in the early stages of computer-based learning) could ensure that right answers were highly probable, but only at the cost of minimal opportunities for developing personal understanding, as McKeachie indicated:
Some of the problems in trying to apply the laws of learning to educational situations have been the failure to take account of the differences between humans and other animals – e.g. [the] greater ability to conceptualize, relate, and remember. Other problems have simply derived from failure to take account of important variables controlled in the laboratory situations but interacting with independent variables in natural educational settings. …It may be that [the laws of learning] have application in other restricted situations, but meaningful educational learning is both more robust and more complex. This complexity, so frustrating to those who wish to prescribe educational methods, is a reminder of the fascinating uniqueness of the learner. …It is this that makes education both endlessly challenging and deeply humane.
(McKeachie, 1974, pp. 8–10)
Behaviorist theories treated the mind as a "black box" that couldn't be explored by existing experimental methods and measurement procedures; but once that "embargo" was lifted, psychology became more directly relevant to education by focusing on distinctly human characteristics and on the learning processes involved in classroom activities. One such approach was research into human memory, which sought to explain how information coming into the brain was processed into memory and stored there.
URL: https://www.sciencedirect.com/science/article/pii/B9780128053591000024
Caregiver Training and Follow-Up
Jonathan Tarbox, Taira Lanagan Bermudez, in Treating Feeding Challenges in Autism, 2017
8.3.2.2 Programing for Maintenance
Many of the same procedures that are useful for programing for generalization are also useful for programing for maintenance because maintenance can actually be thought about as a special kind of generalization: generalization across time. We will not repeat our recommendations regarding generalization here, but we encourage you to look over the generalization procedures described in that section when programing for maintenance. Two other variables are also worth considering: (1) reinforcement history and (2) intermittent reinforcement.
Research from the behavioral momentum literature suggests that behaviors that have earned more reinforcement in the past are more resistant to change than behaviors that have earned less reinforcement (Nevin & Shahan, 2011). Although most of this research has been basic laboratory research, it has important applied implications: it may be very important to ensure that the desirable behavior change you have produced has received a lengthy and rich history of reinforcement before you fade treatment out and hope that it maintains. So, all other things being equal, provide more successful treatment meals containing rich reinforcement for varied and flexible eating, rather than fewer, before fading out your intervention.
Intermittent reinforcement is a commonly used strategy to promote maintenance of behavior change. Earlier phases of treatment meals often reinforce every occurrence of eating nonpreferred foods, that is, they implement positive reinforcement on an FR1 schedule. This is an effective approach to intervention and it helps build the rich reinforcement history for eating that the behavioral momentum literature suggests is necessary. However, before expecting improved eating to maintain, you should consider thinning the reinforcement schedule. After FR1 positive reinforcement has been implemented for a substantial amount of time, you could transition to an FR2 schedule, where the client needs to eat two consecutive bites of nonpreferred food before earning a positive reinforcer. You could then thin the schedule of reinforcement to FR3, FR4, FR5, and so on, until you reach a leaner schedule. You may also choose to switch to a variable ratio (VR) schedule, where the exact number of bites required for reinforcement is unpredictable but averages around some specific number. For example, you might implement something along the lines of the following schedule: FR1, FR2, VR2, VR3, VR5, VR7, VR10, VR15, and then VR20. Such reinforcement schedules probably resemble reinforcement contingencies in the real world and they may be especially effective for encouraging maintenance. You can also use the same schedule-thinning strategy to gradually increase the number of nonpreferred bites required to be eaten until the meal is terminated (Fig. 8.1).
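The FR1 through VR20 thinning progression described above can be sketched as follows. This is an illustrative sketch only: the step list mirrors the example sequence in the text, but the helper names and the way the VR requirement is drawn (uniformly around its average) are my own simplifications, not part of the authors' protocol.

```python
import random

# The example thinning sequence from the text: each phase names a schedule
# type and the (average) number of bites required per reinforcer.
THINNING_STEPS = [("FR", 1), ("FR", 2), ("VR", 2), ("VR", 3), ("VR", 5),
                  ("VR", 7), ("VR", 10), ("VR", 15), ("VR", 20)]

def bites_required(kind, n):
    """Bites needed before the next reinforcer under the current phase.

    FR: fixed requirement of exactly n bites.
    VR: unpredictable requirement that averages n (uniform draw, a simple
    stand-in for however a clinician might randomize the requirement).
    """
    if kind == "FR":
        return n
    return random.randint(1, 2 * n - 1)

random.seed(1)
for kind, n in THINNING_STEPS:
    print(f"{kind}{n}: next reinforcer after {bites_required(kind, n)} bite(s)")
```

In practice, as the following paragraph notes, advancing from one step to the next would be gated on stable, successful feeding at the current step rather than on any fixed timetable.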
Regardless of how you choose to thin your reinforcement schedule, remember that you should only thin it further when the client's feeding is successful and stable at the current parameter. There are no strict rules for how to progress through schedule thinning. The faster you thin, the more likely the client is to be unsuccessful, and the slower you thin, the longer the process will take. The process should be governed by visual inspection of the data after every meal.
URL: https://www.sciencedirect.com/science/article/pii/B9780128135631000082
Sexual disorders
Joseph J. Plaud, in Functional Analysis in Clinical Treatment (Second Edition), 2020
Operant conditioning
Skinner (1969, 1988) argued that past and present contingencies of survival and past and present contingencies of reinforcement shape sexual behavior. By "contingencies of survival," Skinner referred to the selective action of the environment on the species' gene pools. In Skinner's behavioral account, sexual contact has come to function as a powerful primary reinforcer through the contingencies of survival. Skinner also argued that the ontogeny and phylogeny of sexual behavior are closely related. According to Skinner, the contingencies of survival shaped global behavioral patterns exhibited by different organisms. However, natural selection is also complemented during the organism's lifetime by the selection of behavior through its consequences. The capacity to learn through operant conditioning was an adaptive mutation: organisms that were responsive to immediate environmental consequences, for which biological evolution could not prepare them, survived temporally unstable shifts in prevailing environmental features. Skinner termed this selection of behavior operant conditioning, and the behaviors selected through consequences were called operants.
While Skinner focused only on natural selection in the evolution of human sexuality, Darwin (1871) suggested that evolution operated through two selection mechanisms: natural selection and sexual selection. According to Darwin (1871), sexual selection involves male competition for females and female choice among males. Darwin thought that the "law of battle" or male competition for mates accounted for the evolution of male aggressiveness, the male's greater size, and in some species the male's anatomical weapons for fighting. Furthermore, Darwin believed that females would choose males based on factors such as quality of their adornment, their courtship display, and the quality of the resources they control. Sexual selection is currently an area of intense interest among evolutionary biologists and behavioral geneticists and ought to be considered in any account of the phylogenetic heritage of an organism (O'Donohue & Plaud, 1994; Puts, 2016).
A central question in operant studies of sexual arousal is whether operant or classical conditioning has been demonstrated. O'Donohue and Plaud (1994) used Millenson and Leslie's (1979) criteria to evaluate studies claiming to demonstrate operant conditioning of sexual behavior. These criteria were: (1) breaking of the contingency produces a short-term extinction burst in operant, but not classical conditioning; (2) intermittent reinforcement produces greater resistance to extinction in operant conditioning, but this effect is not seen in classical conditioning; (3) complex skeletal behavior involving striated muscle is readily conditioned in operant, but not classical conditioning; (4) autonomic behavior is readily conditioned in classical, but not operant conditioning; (5) the conditioned response is not usually a component of behavior elicited by the reinforcer in operant conditioning, in contrast to classical conditioning; and (6) in operant but not classical conditioning the experimenter usually specifies the nature of the conditioned response within the often broad constraints of biological preparedness.
Quinn, Harbison, and McAllister (1970) studied operant conditioning of sexual arousal with one homosexual subject. They placed the subject on a water-deprivation schedule in which the presentation of liquid, its delivery signaled by a light, was contingent on increases in penile tumescence to a slide of an adult female. In contrast to baseline, the subject emitted increased levels of tumescence when tumescence (the operant response) to a slide of an adult female (the discriminative stimulus) was explicitly reinforced with cold lime juice concentrate. Although this study found a direct contingent relationship between tumescence and consequences, the researchers did not attempt to break the contingency to investigate whether an extinction burst would occur, and they did not investigate whether penile tumescence was sensitive to the effects of intermittent reinforcement.
Using an aversive conditioning procedure, Rosen and Kopel (1977) scheduled a contingent relationship between penile tumescence and the loudness of an alarm clock buzzer. As penile tumescence increased to video stimuli of the subject's initial sexual preference, the alarm sound's loudness also increased. They found a reduction in tumescence while this contingency was in effect. Different schedules of punishment were not used, and the potential physiological effects of fatigue or habituation were not ruled out.
Rosen, Shapiro, and Schwartz (1975) found that subjects demonstrated increased penile tumescence to a discriminative stimulus when monetarily reinforced for such responding. In contrast, a yoked control group, which received non-contingent reinforcement in the presence of the same discriminative stimulus, showed no increase in penile tumescence to that stimulus. However, Rosen et al. did not further differentiate the effects of contingent versus non-contingent reinforcement, and they did not investigate the direct effects of intermittent reinforcement.
Schaefer and Colgan (1977) also studied whether sexual reinforcement increased penile responding. The researchers used sexually explicit scripts and neutral scripts, which were read by the subjects. They found that penile responding continued to increase over trials when the sexually explicit scripts were followed by sexual reinforcement, a sexually explicit stimulus. The control subjects demonstrated decreased responding over trials. Cliffe and Parry (1980) conducted the most direct study of the operant conditionability of sexual arousal by using sexual stimuli to test the matching law (Plaud, 1992). Originally formulated by Herrnstein (1970), the matching law predicts that when concurrent schedules of reinforcement are in effect, there exists a one-to-one, or matching, relation between the relative overall number of responses and the relative overall number of reinforcement presentations. A variable-interval schedule is one in which a reinforcer is presented after the first response that occurs after a variable amount of time has passed since the previously reinforced response. Concurrent schedules exist when two or more schedules are simultaneously in effect. Cliffe and Parry studied the sexual behavior of a male pedophile under three concurrent variable-interval schedules. The first concurrent choice involved pressing keys to view slides of either women or men; the second was between slides of men or children; the third was between slides of women or children. The matching law accurately described the subject's behavior in all three conditions. Thus, these studies found support for the operant conditioning of sexual arousal. However, as with classical conditioning, the database at present is limited by the methodologies employed. Further, there exists a paucity of operant studies of female sexual arousal.
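In symbols, Herrnstein's matching law states that the proportion of responses allocated to one alternative equals the proportion of reinforcers obtained there: B1/(B1 + B2) = r1/(r1 + r2). A one-line sketch (the reinforcement rates below are illustrative numbers, not Cliffe and Parry's data):

```python
def predicted_response_proportion(r1, r2):
    """Matching law: proportion of responses to alternative 1 equals the
    proportion of reinforcers obtained from alternative 1."""
    return r1 / (r1 + r2)

# e.g., concurrent VI schedules yielding 40 vs. 10 reinforcers per hour:
# the law predicts 80% of responses go to the richer alternative.
print(predicted_response_proportion(40, 10))  # 0.8
```

Empirical choice data often deviate systematically from strict matching (undermatching and bias), which is why generalized forms of the law with fitted exponents are common in the quantitative literature.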
Sexually violent or sadistic behavior has also been hypothesized to involve early developmental pairings of sexual urges and arousal with aggressive stimuli, which are then reinforced and maintained through sexual fantasizing and orgasm through masturbation (Yates, Hucker, & Kingston, 2008).
URL: https://www.sciencedirect.com/science/article/pii/B9780128054697000176
Case Conceptualization and Treatment: Children and Adolescents
Christina M. Low Kapalu, Edward Christophersen, in Comprehensive Clinical Psychology (Second Edition), 2022
5.23.2.11.2 Mental/Behavioral Health Treatment of Diurnal Enuresis
For enuresis, the AACAP (2004) recommended, as the first line of treatment for enuresis with motivated parents, the use of a battery-operated bedwetting alarm, along with a written contract, thorough instructions, frequent monitoring (by phone follow-up or repeat office visits), and intermittent reinforcement before discontinuation of the alarm. The Practice Parameter did not distinguish between night wetting and day wetting. Despite this recommendation, few medical providers are aware of this intervention or recommend it to families (Christophersen, 2005). And, as with night wetting, a history of a variety of forms of previous punishment is common in day wetters and must be discouraged. This applies not only to all forms of physical punishment but also to verbal punishment, in the form of lectures and berating, and to the loss of privileges. In addition, caregivers, teachers, and close relatives should be provided with psychoeducation so that they understand that day wetting is usually not under the child's control (Christophersen and Friman, 2010).
Friman and Vollmer (1995) reported on the effects of using a urine alarm to treat chronic day wetting. Using an ABAB reversal design, they reported that the alarm eliminated wetting and that continence was maintained at 3- and 6-month follow-up. Halliday et al. (1987) reported a randomized controlled trial of two alarm devices to treat day wetting in 44 children between 5 and 15 years of age. The contingent alarm sounded when wetting occurred; the non-contingent alarm sounded intermittently, unrelated to wetting. Two-thirds of the patients responded to the alarm by becoming dry, regardless of which alarm group they were in. Twenty-three percent of the patients who responded to the alarm relapsed up to two years after completion of the alarm treatment.
Hagstroem et al. (2010) conducted a randomized controlled trial comparing standard urotherapy alone to standard urotherapy plus a programmable timer watch to treat urge incontinence and voiding postponement. The watches were used to prompt timed voiding throughout the day. The authors found that when the timer was used in addition to standard urotherapy, up to 60% of children were able to achieve and maintain dryness for 7 months post-intervention, compared with only 5% of controls.
URL: https://www.sciencedirect.com/science/article/pii/B9780128186978000662
Assessing and Diagnosing Posttraumatic Stress Disorder
Sharon L. Johnson, in Therapist's Guide to Posttraumatic Stress Disorder Intervention, 2009
FUNCTIONAL IMPAIRMENT AS A CONSEQUENCE OF CHRONIC DOMESTIC TRAUMA
There has been much research and many publications on the impact of trauma associated with child sexual abuse and children living in a chronically traumatizing environment with domestic violence. In addition, many adults experience the consequences of the socially significant and chronically traumatizing life experience of domestic violence.
Relational disturbance has been of interest to many researchers, particularly in the context of domestic violence. Dutton and colleagues (Dutton and Painter, 1981, 1993; Dutton, 2008) explored what has been referred to as Domestic Stockholm Syndrome and a traumatic bonding model with the following characteristics:
- Strong emotional ties develop in the context of intermittent marital abuse.
- The majority of battered women (87%) have not been abused in previous relationships.
- There are unmet dependency needs of both partners.
- Two common features are power imbalance and intermittent reinforcement. "When the physical punishment is administered at intermittent intervals, and when it is interspersed with permissive and friendly contact, the phenomenon of 'traumatic bonding' seems most powerful" (Dutton and Painter, 1981, p. 149).
- It results in a strong emotional attachment, or trauma bond: strong emotional ties between two people in which one person intermittently traumatizes the other (for example, harasses, beats, threatens, abuses, or intimidates the other).
- There are cognitive changes such as introjection of self-blame and lowered self-esteem.
- The attachment bond can be described by an "elastic band" metaphor: victims pull or stretch away from the abuser and then return to the known quantity, altering their memory of past abuse and the perceived likelihood of future abuse in the relationship. It is difficult to leave; victims may be isolated, have few if any resources, fear they are not capable of living independently, etc.
Additional features include the following:
- In approximately 71% of all violent couple fights, women initiate the first violent act (this is a controversial statistic).
- Male-to-female acts of violence are approximately six times more likely to cause injuries to the woman, along with more health problems, stress, depression, and psychosomatic symptoms.
- A critical difference between men and women in domestic violence is that men are motivated to use violence as a means to terrorize and victimize their partners (i.e., violence is used as a means of controlling or dominating the partner), whereas women tend to use violence as an expression of frustration or in self-defense.
Lawson et al. (2003)
- Approximately 1300 women and 800 men are killed each year by partners.
- Once violence is initiated, it does not cease without some type of intervention.
- Previous violence by a partner has a 46–72% probability of predicting future violence.
Hedtke et al. (2008)
- Lifetime violence exposure is associated with increased risk of PTSD (and other mental health problems such as depression and substance use disorders).
- The risk of PTSD, depression, and substance use disorders increases incrementally with the number of different types of violence experienced.
- New incidents of violence between the baseline and follow-up interviews were associated with an escalated risk of PTSD and substance use disorders.
Montero (2000) states that the imbalance of power is not a consequence but an antecedent of the abuse. The trauma bond protects the victim's psychological integrity. Five stages in the development of the cognitive bond have been described by Montero (2000):
1. Trigger: initial physical abuse breaks the previous beliefs and security in the relationship; there is disorientation and an acute stress reaction.
2. Reorientation: cognitive dissonance between the evidence of abuse and continuing in the relationship (between intermittent abuse episodes), cognitive restructuring to decrease that dissonance, and thoughts of self-blame.
3. Coping: managing the potential for abuse.
4. Adaptation: assumption of the abuser's beliefs and projection of guilt outside the couple's relationship.
5. Full emergence of Domestic Stockholm Syndrome.
URL: https://www.sciencedirect.com/science/article/pii/B9780123748515000019
Experimental Analysis of Behavior, Part 2
Iver H. Iversen, in Techniques in the Behavioral and Neural Sciences, 1991
4.4.2 Differentiation of beak separation in pigeons
An important question regarding generality of response differentiation would be to determine whether scale of analysis matters. The spatial domain of peck location can be considered wide because the whole body has to move. A much narrower scale would be differentiation of individual response components. A given response instance can be considered an assembly of discrete components rather than a unitary response. For example, the pigeon's key peck may be analyzed into locomotion, head transport, and beak separation. These components are mediated by different effector systems and therefore might be dissociated experimentally. To determine if response differentiation holds at this much smaller scale of analysis, so that one component may be conditioned to change while the others remain the same, Deich et al. (1988) measured beak separation and key contact (pecking) separately. Beak separation is anatomically different from key contact because beak separation is not mechanically constrained by key contact (i.e., the key can be contacted with open or closed beak). Fig. 12A depicts Deich et al.'s technique. A Hall-effect device, which outputs a voltage corresponding to the strength of an applied magnetic field, was fixed on the upper beak and a small magnet under the lower beak. After pretraining with intermittent reinforcement of key pecking, baseline values of beak separation were determined during discrete-trial continuous reinforcement. A trial began when a key was lit red. The first peck to the red key was reinforced with food and turned the light off. After 2 s a new trial began, etc. Next, larger or smaller interbeak distances were differentially conditioned. For larger distances (up), the criterion for reinforcement was that the associated interbeak distance should be in the 80th percentile of the distribution of distances for the preceding session. 
Conversely, differential reinforcement of smaller distances (down) required that distances were in the 20th percentile of the distribution for the preceding session. Fig. 12B presents relative frequency distributions (percentages) of interbeak distance (gape) during initial baseline and for the final session of each response differentiation condition for one bird. Relative to baseline, differential reinforcement of beak separation produced clear shifts in the frequency distributions, with enlargement of beak separation when the criterion went up and lessening of beak separation when the criterion went down. The frequency distributions for up and down were practically nonoverlapping. In addition, each differential reinforcement procedure induced interbeak distances that were absent during baseline. Almost all beak separations were larger under upward conditioning than under baseline, and more than 50% of the beak separations were smaller under downward conditioning than under baseline. Note that this induction of new interbeak distances was not necessary to produce reinforcement. Hence, once again, new response instances were generated outside of the criterion required for reinforcement (i.e., the functional operant is a wider class than the descriptive operant, Section 4.1).
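The percentile-based shaping criterion can be sketched as follows. This is an illustrative reconstruction: the nearest-rank percentile convention, the function names, and the numeric baseline are my own assumptions; Deich et al. do not specify a particular computation.

```python
def percentile(values, p):
    """p-th percentile by the nearest-rank convention (one of several)."""
    ordered = sorted(values)
    k = max(0, min(len(ordered) - 1, round(p / 100 * len(ordered)) - 1))
    return ordered[k]

def criterion(previous_session, direction):
    """Build a reinforcement rule from the preceding session's distribution.

    'up': reinforce interbeak distances at or above the 80th percentile;
    'down': reinforce distances at or below the 20th percentile.
    """
    if direction == "up":
        cutoff = percentile(previous_session, 80)
        return lambda gape: gape >= cutoff
    cutoff = percentile(previous_session, 20)
    return lambda gape: gape <= cutoff

baseline = [4, 5, 5, 6, 6, 7, 7, 8, 9, 10]  # hypothetical gapes, arbitrary units
reinforce_up = criterion(baseline, "up")
print(reinforce_up(9), reinforce_up(5))  # True False
```

Because each session's criterion is recomputed from the previous session's distribution, the requirement tracks the bird's current behavior, which is what allows the distribution to drift progressively up or down across sessions.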
The data demonstrate that even a subtle, small scale, motor movement, such as beak separation, can be brought under operant control by contingencies of reinforcement. This successful bidirectional differentiation (both up and down) of beak separation in the pigeon confirms that response differentiation as a process is general across different levels of analysis. An interesting parallel can be drawn between this differentiation of small-scale motor movement in pigeons and the differential conditioning of infinitesimal thumb contractions in humans (Hefferline and Keenan, 1963).
Entirely new combinations of response topographies, never seen before in a subject's response repertoire, can be generated through the systematic application of response differentiation techniques. Response differentiation is one of the most powerful methods the Experimental Analysis of Behavior has to offer, for it allows the experimenter to change the form of existing behavior (see Part 1, Ch. 2). The dynamic properties of operant behavior are probably defined, and also restricted, by the processes of response differentiation. Galbicka (1988), in a review of response differentiation, similarly argued that "response differentiation is an integral part of all operant conditioning. Response differentiation is operant conditioning, and vice versa" (p. 343).
URL: https://www.sciencedirect.com/science/article/pii/B9780444812513500138
Source: https://www.sciencedirect.com/topics/psychology/intermittent-reinforcement