Trang 4Topographical Analysis: The Operationalization and
Quantification of Target Behaviors and
Contextual Variables 514
Identification of Functional Relationships and the
Functional Analysis of Behavior 515
BEHAVIORAL ASSESSMENT METHODS: SAMPLING, DATA COLLECTION, AND DATA
EVALUATION TECHNIQUES 517
Sampling 518 Assessment Methods 519 Methods Used to Identify Causal Functional Relationships 521 Methods Used to Estimate the Magnitude of Causal Functional Relationships 523
SUMMARY AND CONCLUSIONS 525 REFERENCES 526
Imagine the following: You are intensely worried. You cannot sleep well, you feel fatigued, and you have a near-constant hollow feeling in the pit of your stomach. At the moment, you are convinced that you have cancer because a cough has persisted for several days. You've been touching your chest, taking test breaths in order to determine whether there is some abnormality in your lungs. Although you would like to schedule an appointment with your physician, you've avoided making the call because you feel certain that either the news will be grim or he will dismiss your concerns as irrational. In an effort to combat your worries about the cancer, you've been repeatedly telling yourself that you're probably fine, given your health habits and medical history. You also know that on many previous occasions, you developed intense worries about health, finances, and career that eventually turned out to be false alarms.

This pattern of repeatedly developing intense and irrational fears is creating a new and disturbing feeling of depressed mood as you realize that you have been consumed by worry about one thing or another for much of your adult life. Furthermore, between the major episodes of worry, there are only fleeting moments of relief. At times, you wonder whether you will ever escape from the worry. Your friends have noticed a change in your behavior, and you have become increasingly withdrawn. Work performance is declining, and you are certain that you will be fired if you do not improve soon. You feel that you must act to seek professional help, and you have asked some close friends about therapists. No one has any strong recommendations, but you have learned of a few possible professionals. You scan the telephone book, eventually settle on a therapist, and after several rehearsals of what you will say, you pick up the phone.
Now, consider the following: If you were to contact a cognitive-behaviorally oriented therapist, what assessment methods would be used to evaluate your condition? What model of behavior problems would be used to guide the focus of assessment, and how would this model differ from ones generated by nonbehavioral therapists? What methods would be used to assess your difficulties? What sort of information would be yielded by these methods? How would the therapist evaluate the information, and how valid would his or her conclusions be? How would the information be used?
These and other important questions related to behavioral assessment are discussed in this chapter. Rather than emphasize applications of behavioral assessment to research questions and formal hypothesis testing, we concentrate on how behavioral assessment methods are operationalized and executed in typical clinical settings. The initial section of this chapter examines the conceptual foundations of behavioral assessment and how these foundations differ from other approaches to assessment. Then we present information about the extent to which behavioral assessment methods are being used by behavior therapists and in treatment-outcome studies. Specific procedures used in behavioral assessment are described next; here, our emphasis is on reviewing benefits and limitations of particular assessment strategies and data evaluation approaches. Finally, the ways in which assessment information can be organized and integrated into a comprehensive clinical model known as the functional analysis are presented.
CONCEPTUAL FOUNDATIONS OF BEHAVIORAL ASSESSMENT
Two fundamental assumptions underlie behavioral assessment and differentiate it from other theoretical approaches. One of these assumptions is environmental determinism. This assumption states that behavior is functional—it is emitted in response to changing environmental events (Grant & Evans, 1994; S. C. Hayes & Toarmino, 1999; O'Donahue, 1998; Pierce, 1999; Shapiro & Kratochwill, 1988). It is further assumed that learning principles provide a sound conceptual framework for understanding these behavior-environment relationships. Thus, in behavioral assessment, problem behaviors are interpreted as coherent responses to environmental events that precede, co-occur with, or follow the behaviors' occurrence. The measurement of behavior without simultaneous evaluation of critical environmental events would be anathema.
A second key assumption of the behavioral paradigm is that behavior can be most effectively understood when assessment procedures adhere to an empirical approach. Thus, behavioral assessment methods are often designed to yield quantitative measures of minimally inferential and precisely defined behaviors, environmental events, and the relationships among them (Haynes & O'Brien, 2000). The empirical assumption underlies the tendency for behavior therapists to prefer the use of measurement procedures that rely on systematic observation (e.g., Barlow & Hersen, 1984; Cone, 1988; Goldfried & Kent, 1972). It also underlies the strong endorsement of empirical validation as the most appropriate means of evaluating the efficacy and effectiveness of interventions (Nathan & Gorman, 1998).
Emerging out of environmental determinism and empiricism are a number of corollary assumptions about behavior and the most effective ways to evaluate it. These additional assumptions characterize the evolution of thought in behavioral assessment and its openness to change, given emerging trends in learning theory, behavioral research, and psychometrics (Haynes & O'Brien, 2000). The first of these corollary assumptions is an endorsement of the position that hypothetico-deductive methods of inquiry are the preferred strategy for identifying the causes and correlates of problem behavior. Using this method of scientific inquiry, a behavior therapist will often design an assessment strategy whereby client behavior is measured under different conditions so that one or more hypotheses about its function can be tested. Two excellent examples of this methodology are the functional analytic experimental procedures developed by Iwata and colleagues for the assessment and treatment of self-injurious behavior (Iwata et al., 1994) and the functional analytic psychotherapy approach developed by Kohlenberg for assessment and treatment of adult psychological disorders such as borderline spectrum behaviors (Kohlenberg & Tsai, 1991).
A second corollary assumption, contextualism, asserts that the cause-effect relationships between environmental events and behavior are often mediated by individual differences (e.g., Dougher, 2000; Evans, 1985; Hawkins, 1986; Russo & Budd, 1987). This assumption supports the expectation that behaviors can vary greatly according to the many unique interactions that can occur among individual characteristics and contextual events (Wahler & Fox, 1981). Thus, in contemporary behavioral assessment approaches, the therapist may be apt to measure individual difference variables (e.g., physiological activation patterns, self-statements) in order to evaluate how these variables may be interacting with environmental events.
A third corollary assumption is behavioral plasticity (O'Brien & Haynes, 1995). This assumption is represented in the behavioral assessment position that many problem behaviors that were historically viewed as untreatable (e.g., psychotic behavior, aggressive behavior among individuals with developmental disabilities, psychophysiological disorders) can be changed if the correct configuration of learning principles and environmental events is built into an intervention and applied consistently. This assumption supports persistence and optimism with difficult-to-treat problems. It may also underlie the willingness of behavior therapists to work with clients who are eschewed by nonbehavioral practitioners because they were historically deemed untreatable (e.g., persons with mental retardation, schizophrenia, autism, psychosis).
A fourth assumption, multivariate multidimensionalism, posits that problem behaviors and environmental events are often molar constructs that are comprised of many specific and qualitatively distinct modes of responding and dimensions by which they can be measured. Thus, there are many ways in which a single behavior, environmental event, or both can be operationalized. The multidimensional assumption is reflected in an endorsement of multimethod and multifaceted assessment strategies (Cone, 1988; Haynes, 2000; Morris, 1988).
Reciprocal causation is a fifth assumption that characterizes behavioral assessment. The essential position articulated in reciprocal causation is that situational events that influence a problem behavior can in turn be affected by that same behavior (Bandura, 1981). An example of reciprocal causation can be found in patterns of behavior observed among persons with headaches. Specifically, the headache patient may verbalize headache complaints, solicit behaviors from a spouse, and exhibit headache behaviors such as pained facial expressions. These pain behaviors may then evoke supportive or helping responses from a spouse (e.g., turning down the radio, darkening the room, providing medications, offering consolation). In turn, the supportive behavior provided by the spouse may act as a reinforcer and increase the likelihood that the pain behaviors will be expressed in the future. Hence, the pain behaviors may trigger reinforcing consequences, and the reinforcing consequences may then act as an important determinant of future pain behavior (O'Brien & Haynes, 1995).
A sixth assumption, temporal variability, is that relationships among causal events and problem behaviors often change over time (Haynes, 1992). Consequently, it is possible that the initiating cause of a problem behavior differs from the factors maintaining the behavior after it is established. Health promotion behaviors illustrate this point. Specifically, factors that promote the initiation of a preventive health regimen (e.g., cues, perceptions of susceptibility) may be quite different from factors that support the maintenance of the behavior (Prochaska, 1994).
The aforementioned conceptual foundations have a number of implications for therapists who use behavioral assessment techniques. First, it is imperative that persons who endorse a behavioral approach to assessment be familiar with learning principles and how these principles apply to behavior problems observed in clinical settings. Familiarity with learning principles in turn permits the behavior therapist to better understand complex and clinically relevant context-behavior processes that govern environmental determinism. For example, we have noted how virtually any graduate student or behavior therapist can describe classical conditioning as it applies to dogs salivating in response to a bell that was previously paired with meat powder or how Little Albert developed a rabbit phobia. These same persons, however, often have difficulty describing how anticipatory nausea and vomiting in cancer patients, cardiovascular hyperreactivity to stress, social phobia, and panic attacks may arise from classical conditioning. Similarly, most clinicians can describe how operant conditioning may affect the behavior of rats and pigeons under various conditions of antecedent and consequential stimuli. However, they often have a limited capacity for applying these principles to important clinical phenomena such as client resistance to therapy directives, client transference, therapist countertransference, and how various therapy techniques (e.g., cognitive restructuring, graded exposure with response prevention) promote behavior change.
In addition to being well-versed in learning theory, behavior therapists must also learn to carefully operationalize constructs so that unambiguous measures of problem behavior can be either created or appropriately selected from the corpus of measures that have been developed by other researchers. This task requires a deliberate and scholarly approach to assessment as well as facility with research methods aimed at construct development and measurement (cf. Cook & Campbell, 1979; Kazdin, 1998). Finally, behavior therapists must know how to create and implement assessment methods that permit reasonable identification and measurement of complex relationships among behaviors and contextual variables.
Imagine once again that you are the client described in the beginning of the chapter. The assumptions guiding the behavior therapist's assessment would affect his or her model of your problem behavior and the selection of assessment methods. Specifically, guided by the empirical and multivariate assumptions, the behavior therapist would be apt to use methods that promote the development of unambiguous measures of the problem behavior. Thus, he or she would work with you to develop clear descriptions of the key presenting problems (insomnia, fatigue, a feeling in the pit of the stomach, chronic worry, touching chest and taking test breaths, negative expectations about prognosis, use of reassuring self-statements). Furthermore, guided by environmental determinism and contextualism, the behavior therapist would encourage you to identify specific persons, places, times, and prior learning experiences that may account for variation in problem behavior (e.g., do the various problem behaviors differ when you are alone relative to when you are with others, is your worry greater at work versus home, etc.). Finally, guided by assumptions regarding reciprocal causation and temporal variability, the behavior therapist would allow for the possibility that the factors controlling your problem behaviors at the present time may be different from initiating factors. Thus, although it may be the case that your worries were initiated by a persistent cough, the maintenance of the worry may be related to a number of current causal factors such as your negative expectations about cancer prognosis and your efforts to allay worry by using checking behaviors (e.g., test breaths, chest touching) and avoidance (not obtaining a medical evaluation).
In the following sections, we review procedures used by behavioral assessors to operationalize, measure, and evaluate problem behavior and situational events. As part of the review, we highlight research findings and decisional processes that guide the enactment of these procedures. Prior to presenting this information, however, we summarize the current status of behavioral assessment in clinical settings and research applications.
CURRENT STATUS AND APPLICATIONS OF BEHAVIORAL ASSESSMENT
One indicator of the status and utility of an assessment method is the extent to which it is used among practitioners and researchers. Frequency of use among practitioners and researchers represents a combination of influences, including the training background of the practitioner, the treatment utility of information provided by the method (i.e., the extent to which information can guide treatment formulation and implementation), and the extent to which the method conforms to the demands of contemporary clinical settings. Frequency of use also represents the extent to which the method yields information that is reliable, valid, and sensitive to variation (e.g., treatment effects, variation in contextual factors, and experimental manipulations).
An examination of the behavioral assessment practices of behaviorally oriented clinicians was conducted to determine their status and utility among those who endorse a cognitive-behavioral perspective. Five hundred members of the Association for Advancement of Behavior Therapy (AABT) were surveyed (Mettee-Carter et al., 1999). The survey contained a number of items that were used in prior investigations of assessment practices (Elliott, Miltenberger, Kastar-Bundgaard, & Lumley, 1996; Swan & MacDonald, 1978). Several additional items were included so that we could learn about strategies used to evaluate assessment data and the accuracy of these data analytic techniques. The results of the survey regarding assessment practices are presented in this section. Survey results that pertain to the accuracy of data evaluation techniques are presented later in this chapter in the section addressing methods used to evaluate assessment data.
A total of 156 completed surveys were returned by respondents (31%). This response rate was comparable to that obtained by Elliott et al. (1996), who reported that 334 of 964 (35%) surveys were returned in their study. The majority of respondents (91%) held a PhD in psychology, with 4% reporting master's-level training, 2% reporting attainment of a medical degree, and 1% reporting PsyD training. A large proportion of respondents reported that they were engaged in clinical practice in either a private setting (40%), medical center or medical school (16%), or hospital (9%). Thirty percent reported that their primary employment setting was an academic department.
As would be expected, most respondents reported that their primary orientation to assessment was cognitive-behavioral (73%). Less frequently endorsed orientations included applied behavior analysis (10%) and social learning (8%). Regardless of orientation, behavioral assessment was reported to be very important in treatment formulation (mean rating of importance = 5.93, SD = 1.17, on a Likert scale that ranged from 1 = not at all important to 7 = extremely important). Furthermore, respondents reported that they typically devoted four sessions to developing an adequate conceptualization of a client's problem behavior and the factors that control it.

The more commonly reported assessment methods used by behavior therapists in this study are summarized in Table 22.1. For comparison purposes, we included data reported by Elliott et al. (1996), who presented results separately for academic psychologists and practitioners. As is readily evident in Table 22.1, our data are quite similar to those reported by Elliott et al. Additionally, like Elliott et al., we observed that interviewing (with the client, a significant other, or another professional) is clearly the most commonly used assessment method. The administration of self-report inventories is the next most commonly used assessment method, followed by behavioral observation and self-monitoring. It is important to note that these latter two methods are more uniquely aligned with a behavioral orientation to assessment than are interviewing and questionnaire administration.
TABLE 22.1 Results of 1998 Survey Investigating Assessment Methods Used by Members of the Association for the Advancement of Behavior Therapy (columns: Assessment Method; Percent of Clients Assessed with this Method, Current Study and Elliott et al., 1996)
In order to evaluate the extent to which the various assessment methods were associated with assessment orientation, we regressed values from an item that assessed self-reported degree of behavioral orientation (rated on a 7-point Likert scale) onto the 13 assessment method items. Results indicated that use of analog functional analysis (β = .23, t = 2.8, p < .01), interviewing with the client (β = -.22, t = -2.75, p < .01), and projective testing (β = -.18, t = -2.21, p < .05) accounted for significant proportions of variance in the degree of behavioral orientation rating. The direction of association in this analysis indicated that persons who described themselves as more behaviorally oriented were more likely to use analog functional analysis as an assessment method and less likely to use interviewing and projective assessment methods.
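For readers who wish to see this analytic step spelled out, a minimal sketch is given below. The data, variable names, and coefficients are hypothetical stand-ins (this is not the survey's dataset or the authors' code); it simply illustrates how a self-rated orientation item can be regressed onto assessment-method-use items with ordinary least squares in Python.

```python
# Illustrative only: regress self-rated behavioral orientation (1-7 Likert)
# onto percent-of-clients-assessed scores for a few assessment methods.
# Data and column names are hypothetical stand-ins for the survey items.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 156  # number of returned surveys in the study described above
df = pd.DataFrame({
    "analog_functional_analysis": rng.uniform(0, 100, n),
    "client_interview": rng.uniform(0, 100, n),
    "projective_testing": rng.uniform(0, 100, n),
})
# Simulated criterion: orientation rises with analog functional analysis use
# and falls with interview and projective testing use, plus noise.
df["behavioral_orientation"] = (
    4 + 0.01 * df["analog_functional_analysis"]
    - 0.01 * df["client_interview"]
    - 0.005 * df["projective_testing"]
    + rng.normal(0, 1, n)
).clip(1, 7)

X = sm.add_constant(df[["analog_functional_analysis",
                        "client_interview",
                        "projective_testing"]])
model = sm.OLS(df["behavioral_orientation"], X).fit()
print(model.summary())  # reports coefficients, t values, and p values
```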
In addition to the methods reported by therapists in surveys, another indicator of the status and applicability of behavioral assessment is its use in clinical research. Haynes and O'Brien (2000) evaluated data on the types of assessment methods used in treatment outcome studies published in the Journal of Consulting and Clinical Psychology (JCCP) from 1968 through 1996. JCCP was chosen because it is a highly selective, nonspecialty journal that publishes state-of-the-art research in clinical psychology. Articles published in 2000 were added to these data; the results are summarized in Table 22.2.
Table 22.2 illustrates several important points about the relative status and applicability of behavioral assessment. First, it is apparent that self-report questionnaire administration has grown to be the dominant assessment method. Although it is not specifically reflected in the table, most of the questionnaires used in these treatment outcome studies assessed specific problem behaviors rather than broad personality constructs. Thus, their use is quite consistent with the behavioral approach to assessment, which supports the use of focused and carefully designed indicators of problem behavior. Second, the prototypical behavioral assessment methods—behavioral observation and self-monitoring—are maintaining their status as useful measures for evaluating treatment outcomes, and psychophysiological measurement appears to be increasingly used.

Returning once again to your experiences as the hypothetical client with chronic worries, we would argue that in addition to encountering a behavior therapist who tends to endorse certain assumptions regarding behavior and who would seek careful operationalization of behavior and contexts, you would also be evaluated using a number of methods, including a clinical interview, questionnaire administration, self-monitoring, and direct observation. Alternatively, it is unlikely that you would undergo projective testing or complete a personality inventory.
GOALS AND APPLICATIONS OF BEHAVIORAL ASSESSMENT
The primary goal of behavioral assessment is to improve clinical decision making by obtaining reliable and valid information about the nature of problem behavior and the factors that control it (Haynes, 2000). This primary goal is realized through two broad classes of subordinate goals of behavioral assessment: (a) to objectively measure behavior and (b) to identify and evaluate relationships among problem behaviors and causal factors. In turn, when these subordinate goals are realized, the behavior therapist is better able to make valid decisions regarding treatment design, treatment selection, treatment outcome evaluation, treatment process evaluation, and identification of factors that mediate response to treatment (Haynes & O'Brien, 2000).

To attain the two subordinate goals, a behavior therapist must generate detailed operational definitions of problem behaviors and potential causal factors. After this step, strategies for collecting empirical data about relationships among problem behaviors and causal factors must be developed and enacted. Finally, after data collection, proper evaluation procedures must be used to quantify the magnitude of causal effects. In the following sections, the assessment processes and the decisions associated with these processes are reviewed.

TABLE 22.2 Assessment Methods Used in Treatment Outcome Studies Published in the Journal of Consulting and Clinical Psychology
Topographical Analysis: The Operationalization and Quantification of Target Behaviors and Contextual Variables
Target Behavior Operationalization and Quantification
In consonance with the empirical assumption, an important goal of behavioral assessment is to accurately characterize problem behaviors. To accomplish this goal, the behavior therapist must initially determine which behaviors emitted by the client are to be the focus of the assessment and subsequent intervention. These selected behaviors are commonly referred to as target behaviors.
After a target behavior has been identified, the behavior therapist must determine what constitutes the essential characteristics of the behavior. Operational definitions are used to capture the precise, unambiguous, and observable qualities of the target behavior. When developing an operational definition, the clinician often strives to maximize content validity (i.e., the extent to which the operational definition captures the essential elements of the target behavior), and—consistent with the multidimensional assumption—it is accepted that a client's problem behavior will need to be operationalized in a number of different ways.
In order to simplify the operationalization decisions, behavioral assessment writers have recommended that complex behaviors be partitioned into at least three interrelated modes of responding: verbal-cognitive behaviors, physiological-affective behaviors, and overt-motor behaviors (cf. Hollandsworth, 1986; Spiegler & Guevremont, 1998). The verbal-cognitive mode subsumes spoken words as well as cognitive experiences such as self-statements, images, irrational beliefs, attitudes, and the like. The physiological-affective mode subsumes physiological responses, physical sensations, and felt emotional states. Finally, the overt-motor mode subsumes observable responses that represent skeletal-nervous system activation and are typically under voluntary control.
The process of operationally defining a target behavior can be deceptively complex. For example, a client who reports that she is depressed may be presenting with a myriad of cognitive, emotional, and overt-motor behaviors, including negative expectancies for the future, persistent thoughts of guilt and punishment, anhedonia, fatigue, sadness, social withdrawal, and slowed motor movements. However, another client who reports that he is depressed may present with a very different configuration of verbal-cognitive, physiological-affective, and overt-motor behaviors. It is important to note that these different modes of responding that are all subsumed within the construct of depression may be differentially responsive to intervention techniques. Thus, if the assessor measures a very restricted number of response modes (e.g., a measure only of feeling states), the validity of critical decisions about intervention design, intervention evaluation, and intervention process evaluation may be adversely affected.
After a target behavior has been operationalized in terms of modes, appropriate measurement dimensions must be selected. The most commonly used measurement dimensions in clinical settings are frequency, duration, and intensity. Frequency refers to how often the behavior occurs across a given time frame (e.g., number per day, per hour, per minute). Duration provides information about the amount of time that elapses between behavior initiation and completion. Intensity provides information about the force or salience of the behavior in relation to other responses emitted by the client.

Although all of the aforementioned modes and dimensions of behavior can be operationalized and incorporated into an assessment, varying combinations will be evaluated in any given case. For example, Durand and Carr (1991) evaluated three children who were referred for assessment and treatment of self-injurious and disruptive behaviors. Their operationalization was limited to frequency counts of overt-motor responses. Similarly, Miller's (1991) topographical description of a veteran with posttraumatic stress disorder and an airplane phobia quantified self-reported anxiety, an affective-physiological response, using only a measure of intensity (i.e., subjective units of distress). In contrast, Levey, Aldaz, Watts, and Coyle (1991) generated a more comprehensive topographical description of a client with sleep onset and maintenance problems. Their topographical analysis emphasized the temporal characteristics (frequency and duration of nighttime awakenings, rate of change from an awake state to sleep, and interresponse time—the time that elapsed between awakenings) and variability (variation in sleep onset latencies) of overt-motor (e.g., physical activity), affective-physiological (e.g., subjective distress), and cognitive-verbal (i.e., uncontrollable presleep cognitions) target behaviors.
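To show how these dimensions can be quantified in practice, the following minimal sketch uses hypothetical self-monitoring records (the episode values and observation window are invented for illustration and are not drawn from the studies cited above) to compute frequency, mean duration, and mean intensity for a single operationally defined target behavior.

```python
# Illustrative only: summarizing one target behavior (e.g., chest checking and
# test breaths) along the frequency, duration, and intensity dimensions.
# Each record is (start_minute, end_minute, intensity); all values hypothetical.
from dataclasses import dataclass

@dataclass
class Episode:
    start_min: float   # minutes from the start of the observation day
    end_min: float
    intensity: float   # e.g., subjective units of distress (0-100)

episodes = [
    Episode(60, 63, 40),
    Episode(250, 258, 70),
    Episode(610, 612, 55),
]

observation_hours = 16  # waking hours monitored that day

frequency_per_hour = len(episodes) / observation_hours
mean_duration_min = sum(e.end_min - e.start_min for e in episodes) / len(episodes)
mean_intensity = sum(e.intensity for e in episodes) / len(episodes)

print(f"Frequency: {frequency_per_hour:.2f} episodes/hour")
print(f"Mean duration: {mean_duration_min:.1f} minutes")
print(f"Mean intensity (SUDS): {mean_intensity:.1f}")
```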
Contextual Variable Operationalization and Quantification
After operationally defining target behaviors, the behavior therapist needs to construct operational definitions of key contextual variables. Contextual variables are environmental events and characteristics of the person that surround the target behavior and exert nontrivial effects upon it. Contextual factors can be sorted into two broad modes: social-environmental factors and intrapersonal factors (O'Brien & Haynes, 1997). Social-environmental factors subsume interactions with other people or groups of people as well as the physical characteristics of an environment such as temperature, noise levels, lighting levels, food, and room design. Intrapersonal factors include verbal-cognitive, affective-physiological, and overt-motor behaviors that may exert significant effects on the target behavior.
The contextual factor measurement dimensions are similar to those used with target behaviors. Specifically, the frequency, duration, and intensity of contextual factor occurrence are most often measured. For example, the intensity and duration of exposure to adult attention, demanding tasks, or both can be reliably measured and have been shown to have a significant impact on the frequency and magnitude of self-injurious behavior among some clients (Derby et al., 1992; Durand & Carr, 1991; Durand & Crimmins, 1988; Taylor & Carr, 1992a, 1992b). Similarly, the magnitude, frequency, and duration of exposure to hospital cues among chemotherapy patients with anticipatory nausea and vomiting have been shown to exert a significant impact on symptom severity (Burish, Carey, Krozely, & Greco, 1987; Carey & Burish, 1988).
In summary, careful operationalization of behavior and contextual variables is one of the primary goals of behavioral assessment. Target behaviors are typically partitioned into modes, and within each mode, several dimensions of measurement may be used. Similarly, contextual variables can be partitioned into types and dimensions. Applied to the hypothetical client with chronic worry regarding cancer, we can develop a preliminary topographical analysis. Specifically, negative expectations about prognosis, disturbing mental images, and reassuring self-statements would fall into the cognitive-verbal mode of responding. The affective-physiological mode would subsume feelings of fatigue, sleeplessness, sad mood, the sensation in the pit of your stomach, and specific physical symptoms associated with worry (e.g., increased heart rate, trembling, muscle tension, etc.). Finally, the overt-motor mode would include social withdrawal, checking behaviors, and avoidance behaviors. Each of the behaviors could also be measured along a number of different dimensions such as frequency, intensity (e.g., degree of belief in negative or reassuring self-statements, vividness of mental images, degree of heart rate elevation), duration, or any combination of these.
The contextual variables could also be identified and operationalized for this case. Specifically, the behavior therapist would seek to identify important social-environmental and intrapersonal variables that may plausibly promote changes in target behavior occurrence. For example, what is the nature of current family and work environments, and have there been substantial changes in them (e.g., have increased stressors been experienced)? What sorts of social and situational contexts are associated with target behavior intensification and target behavior improvement?
Applications of the Topographical Analysis of Behavior and Contexts
The operationalization and quantification of target behavior and contextual factors can serve important functions in behavioral assessment. First, operational definitions can help the client and the behavior therapist think carefully and objectively about the nature of the target behaviors and the contexts within which they occur. This type of consideration can guard against oversimplified, biased, and nonscientific descriptions of target behaviors and settings. Second, operational definitions and quantification allow the clinician to evaluate the social significance of the target behavior or the stimulus characteristics of a particular context relative to relevant comparison groups or comparison contexts. Finally, operationalization of target behaviors is a critical step in determining whether behavioral criteria are met for establishing a psychiatric diagnosis using the Diagnostic and Statistical Manual of Mental Disorders–Fourth Edition (DSM-IV; American Psychiatric Association, 1994) or the ninth edition of the International Classification of Diseases (ICD-9; American Medical Association). This latter process of rendering a diagnosis is not without controversy in the behavioral assessment literature. However, it is the case that with the increasing development of effective diagnosis-specific treatment protocols, the rendering of a diagnosis can be a critical element of pretreatment assessment and intervention design. For example, the pattern of behaviors experienced by the hypothetical client with cancer worries would conform to a diagnosis of generalized anxiety disorder, and it would be reasonable to use the empirically supported treatment protocol for this disorder that was developed by Craske, Barlow, and O'Leary (1992).
Identification of Functional Relationships and the Functional Analysis of Behavior
After target behaviors and contextual factors have been identified and operationalized, the therapist will often wish to develop a model of the relationships among these variables. This model of causal variable-target behavior interrelationships is the functional analysis. As is apparent in the preceding discussion of target and causal variable operationalization, a wide range of variables will need to be incorporated into any reasonably complete functional analysis. As a result, behavior therapists must make important decisions regarding (a) how complex assessment data can be analyzed so that relationships among target behaviors and causal factors can be estimated, and (b) how the resultant information can be organized into a coherent model that in turn will guide treatment formulation and evaluation.
Defining the Functional Analysis
The term functional analysis has appeared in many research publications, and many behavioral assessment experts have argued that the functional analysis is the core research methodology in behaviorism (cf. Follette, Naugle, & Linnerooth, 2000; O'Neill, Horner, Albin, Storey, & Sprague, 1990; Sturmey, 1996). In terms of clinical utility, a number of authors have argued that an incorrect or incomplete functional analysis can produce ineffective behavioral interventions (e.g., Axelrod, 1987; Evans, 1985; S. L. Hayes, Nelson, & Jarret, 1987; Haynes & O'Brien, 1990, 2000; Iwata, Kahng, Wallace, & Lindberg, 2000; Nelson & Hayes, 1986).
Despite the fact that the functional analysis is considered to be a critical component of assessment, the term has been used to characterize a diverse set of clinical activities, including (a) the operationalization of target behavior (e.g., Bernstein, Borkovec, & Coles, 1986; Craighead, Kazdin, & Mahoney, 1981), (b) the operationalization of situational factors (Derby et al., 1992; Taylor & Carr, 1992a, 1992b), (c) single-subject experimental procedures where hypothesized causal variables are systematically manipulated while measures of target behavior are collected (e.g., Peterson, Homer, & Wonderlich, 1982; Smith, Iwata, Vollmer, & Pace, 1992), (d) measurement of stimulus-response or response-response relationships (Hawkins, 1986), (e) assessment of motivational states (Kanfer & Phillips, 1970), and (f) an overall integration of operationalized target behaviors and controlling factors (Correa & Sutker, 1986; S. C. Hayes & Follette, 1992; Nelson, 1988). Because of the ambiguity surrounding the term, we proposed that the functional analysis be defined as "the identification of important, controllable, causal functional relationships applicable to a specified set of target behaviors for an individual client" (Haynes & O'Brien, 1990, p. 654).
This definition of functional analysis has several important characteristics. First, it is important to note that taken alone, a functional relationship only implies that the relationship between two variables can be adequately represented by a mathematical formula (Blalock, 1969; Haynes, 1992; James, Mulaik, & Brett, 1982). In behavioral assessment, the presence of a functional relationship is typically supported by the observation of covariation among variables. Some of these functional relationships represent a causal process, whereas others do not. Because information about causality is most relevant to treatment design and evaluation, the functional analysis should be designed to assess causal functional relationships.
Many variables can exert causal effects on a particular target behavior. Consequently, the behavior therapist must decide which subset of causal functional relationships is most relevant for treatment design. Two criteria that are used to isolate this subset of relationships are the concepts of shared variance and modifiability. Thus, our definition of the functional analysis specifies that there is a focus on identifying and evaluating important and controllable causal functional relationships.
Another important characteristic of the functional analysis is its idiographic emphasis—that is, it is postulated that enhanced understanding of target behavior and causal factor interactions will be found when the functional analysis emphasizes evaluation of specific target behaviors for an individual client. This idiographic emphasis is consistent with the behavioral principles of environmental determinism and contextualism.
Finally, it is important to note that the functional analysis is undefined in relation to methodology, types of variables to be quantified, and number of functional relationships to be evaluated. Given the complexity of causal models of problem behavior, it is important that behavior therapists employ diverse assessment methodologies that measure multiple modes and dimensions of behavior and contexts.
Reducing Complexity of Generating a Functional Analysis: The Role of Presuppositions
Given that there are at least three modes of responding and two broad modes of contextual variables, a single target behavior could have six combinations of interactions among target behavior modes and contextual factor modes (see Table 22.3). Furthermore, if we consider that there are many different relevant measurement dimensions (e.g., frequency, duration, intensity) for target behaviors and contextual factors, the number of possible interactions rapidly becomes unwieldy.
TABLE 22.3 Interactions Among Basic Target Behavior and Causal Factor Categories

                          Mode of Responding
Causal Variable Type      Cognitive-Verbal    Affective-Physiological    Overt-Motor
Social-environmental
Intrapersonal
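As a small illustration of this combinatorial growth (the labels mirror Table 22.3 and the dimensions named above; the enumeration is ours and not a published assessment procedure), the sketch below lists the six basic combinations and shows how the count expands once each side of the relationship is also assigned a measurement dimension.

```python
# Illustrative only: enumerate the causal-factor-type x response-mode
# combinations from Table 22.3, then show how adding measurement dimensions
# (frequency, duration, intensity) multiplies the number of possible interactions.
from itertools import product

response_modes = ["cognitive-verbal", "affective-physiological", "overt-motor"]
causal_types = ["social-environmental", "intrapersonal"]
dimensions = ["frequency", "duration", "intensity"]

basic = list(product(causal_types, response_modes))
print(len(basic), "basic combinations")  # 6
for causal, mode in basic:
    print(f"  {causal} factor -> {mode} response")

# If the causal factor and the target behavior are each measured on one of
# three dimensions, the count grows to 2 x 3 x 3 x 3 = 54.
expanded = list(product(causal_types, dimensions, response_modes, dimensions))
print(len(expanded), "combinations with one measurement dimension per side")
```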
A behavior therapist cannot systematically assess all possible interactions among target behaviors and contextual factors and incorporate them into a functional analysis. Thus, he or she must decide which of the many interactions are most relevant for treatment design—that is, most important, controllable, and causal. These a priori clinical decisions are similar to presuppositions to the "causal field" described by Einhorn in his study of clinical decision making (1988, p. 57).
Causal presuppositions used by behavior therapists to reduce the complexity of assessment data have not been well evaluated and, as a result, are not well understood (S. C. Hayes & Follette, 1992; Krasner, 1992). We have argued, however (cf. Haynes & O'Brien, 2000), that training and clinical experience exert a strong influence on the types of variables that are incorporated into a functional analysis. Suppose, for example, that you are once again the hypothetical client that we have discussed at various points in this chapter. If you selected a behavior therapist with a strong training history in cognitive therapy, he or she may presuppose that your worries and other target behaviors are caused by maladaptive thoughts that provoke autonomic activation. His or her topographical description and functional analyses may then tend to emphasize the measurement of verbal-cognitive modes of responding. Alternatively, a behavior therapist with training and experience in behavioral marital therapy may presuppose that dysfunctional communication patterns and consequent increases in daily stress are the most relevant causal variables in target behaviors. His or her topographical description and functional analysis may thus emphasize interpersonal-social interactions as key precipitants of marital distress.
A second factor that can influence presuppositions to the causal field among behavior therapists is research. For example, an extensive literature on the functional analysis of self-injurious behavior has provided evidence that four major classes of controlling variables often exert substantial causal influences on the target behavior. In addition, researchers in this area have published laboratory assessment protocols and functional analytic self-report inventories (e.g., Carr & Durand, 1985; Durand & Crimmins, 1988). Thus, a behavior therapist who is preparing to conduct an assessment of a client with self-injurious behavior could use the published literature to partially guide decisions about which variables should be operationalized and incorporated into a functional analysis.
Although presuppositions to the causal field are necessary for simplifying what would otherwise be an impossibly complex assessment task, behavior therapists must guard against developing an excessively narrow or inflexible set of a priori assumptions because inadequate searches for causal relationships and incorrect functional analyses are more likely to occur under these conditions. A few precautionary steps are thus advised. First, it is important that behavior therapists routinely evaluate the accuracy of their clinical predictions and diagnoses (Arkes, 1981; Garb, 1989, 1998). Second, behavior therapists should frequently discuss cases with colleagues and supervisors in order to obtain alternative viewpoints and to guard against biasing heuristics. Third, regular reading of the published literature is advised. Finally, behavior therapists should regularly evaluate hypotheses about the function of target behaviors (using single-subject evaluations or group designs) and attend conferences or workshops in which new information about target behaviors and causal factors can be acquired.
Identifying and Evaluating Causal Relationships
After topographical descriptions have been rendered and the causal field has been simplified, the behavior therapist must attempt to distinguish causal relationships out of a large family of functional relationships between target behaviors and contextual factors. This identification of causal relationships is important because many interventions are aimed at modifying the cause of a problem behavior. The critical indicator of a possible causal relationship is the presence of reliable covariation between a target behavior and contextual factor combined with temporal precedence (i.e., evidence that changes in the causal factor precede changes in the target behavior). To further differentiate causal relationships from noncausal relationships, the behavior therapist should be able to apply a logical explanation for the observed relationship and exclude plausible alternative explanations for the observed relationship (Cook & Campbell, 1979; Einhorn, 1988; Haynes, 1992).
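As a minimal numerical illustration of the covariation-plus-temporal-precedence criterion (the daily ratings below are hypothetical self-monitoring data, and a simple lagged correlation is only one of many ways to examine such patterns), the sketch correlates a contextual factor with same-day and next-day ratings of a target behavior; a clearly stronger lagged association would be consistent with, though by no means proof of, a causal functional relationship.

```python
# Illustrative only: check whether daily work stress (contextual factor) covaries
# with worry intensity (target behavior) and whether stress leads worry by one day.
# Values are hypothetical 0-10 self-monitoring ratings.
import numpy as np

work_stress = np.array([2, 6, 7, 3, 8, 5, 2, 7, 6, 4, 8, 3, 5, 7])
worry =       np.array([3, 4, 7, 6, 4, 8, 5, 3, 7, 6, 4, 8, 4, 5])

same_day = np.corrcoef(work_stress, worry)[0, 1]
# Temporal precedence: today's stress paired with tomorrow's worry.
lagged = np.corrcoef(work_stress[:-1], worry[1:])[0, 1]

print(f"Same-day correlation:        r = {same_day:.2f}")
print(f"Stress -> next-day worry:    r = {lagged:.2f}")
```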
Several behavioral assessment methods can be used to evaluate covariation among variables and to assist with the differentiation of causal relationships from noncausal relationships. Additionally, two predominant approaches to data evaluation are typically used: intuitive judgment and statistical testing. In the following section, methods used to collect assessment data and methods used to evaluate assessment data are reviewed.
BEHAVIORAL ASSESSMENT METHODS: SAMPLING, DATA COLLECTION, AND DATA EVALUATION TECHNIQUES
Given that target behaviors and causal factors have been adequately operationalized, the behavior therapist must then decide how to collect data on these variables and the relationships among them. These decisions are designed to address two interrelated assessment issues: (a) sampling—how and
where the behavior and causal factors should be measured and
(b) what specific techniques should be used to gather
information. The overarching concern in these decisions is validity—simply put, the extent to which specific sampling strategies and assessment methods will yield information that accurately represents client behavior and the effects of causal variables in naturalistic contexts. The various strategies used to gather this information and the relative advantages and disadvantages of each are described in the following section.
Sampling
The constant change that characterizes behavior stems from variation in causal factors that are nested within specific contexts. Because we cannot observe variation in all behaviors and all causal factors within all contexts, sampling strategies must be used in any behavioral assessment. A major consideration in deciding upon a sampling system is degree of generalizability across situations and time. Specifically, we are often interested in gathering data that will allow us to validly infer how a client behaves in the natural environment (Paul, 1986a, 1986b). Thus, we must carefully consider how and where assessment data will be collected to maximize ecological validity. Issues related to behavior and causal factor sampling are described later in this chapter.
Event and Time Sampling
Target behaviors and causal events can be sampled in innumerable ways. An analysis of the behavioral assessment literature, however, indicates that there are five principal behavior sampling strategies most often used in applied settings. Each strategy has advantages and disadvantages as well as unique sources of error.

Event sampling refers to a procedure in which the occurrence of a target behavior or causal event is recorded whenever it is observed or detected. For example, when conducting a classroom observation, we might record each occurrence of an aggressive act emitted by a child (target behavior) and the nature (e.g., positive attention, negative attention, no discernible response) of teacher responses, peer responses, or both to the aggressive act (possible causal factor).
An estimate of frequency is most often calculated using event sampling procedures. Frequency estimates are simply the number of times the behavior occurs within a particular time interval (e.g., hours, days, or weeks). Event recording is most appropriate for target behaviors and causal events that have distinct onset and offset points.
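Because a frequency estimate of this kind is simply a count of detected occurrences per chosen time unit, the computation is easy to automate. The following sketch (Python; the timestamps and function name are hypothetical illustrations, not part of any published protocol) shows one way such an estimate might be derived from event-sampled records.

    from collections import Counter
    from datetime import datetime

    def frequency_per_day(event_times):
        # Count detected occurrences of the target behavior for each observation day.
        return dict(Counter(t.date() for t in event_times))

    # Hypothetical event records from a classroom observation of aggressive acts:
    aggressive_acts = [
        datetime(2001, 3, 5, 9, 15), datetime(2001, 3, 5, 10, 40),
        datetime(2001, 3, 6, 9, 5),
    ]
    print(frequency_per_day(aggressive_acts))  # two occurrences on March 5, one on March 6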
Duration sampling is designed to sample the amount
of time that elapses between the onset and offset of target
behaviors and causal factors. Returning to the aforementioned classroom observation example, we might be interested in not only how often aggressive actions occur, but also how long they persist after they have been initiated.

Interval sampling procedures involve partitioning time into discrete intervals lasting from several seconds to several hours. In partial-interval sampling, an entire interval is recorded as an occurrence if the target behavior is observed for any proportion of the interval. For example, if the child emits any aggressive act within a prespecified interval (e.g., during a 5-min observation period), the complete interval is recorded as an occurrence of the behavior. In whole-interval sampling, the target behavior must be emitted for the entire observation period before the interval is scored as an occurrence. Returning to the aggressive child example, we may decide to record an occurrence of target behavior only when the aggressive act continues across the entire 5-min observation period.
Partial- and whole-interval sampling strategies are recommended for target behaviors that have ambiguous onset and offset points. They are also well suited for target behaviors that occur at such a high rate that observers could not reliably record each occurrence. One of the principal difficulties with interval sampling is misestimation of behavior frequency and duration. Specifically, unless the duration of a behavior exactly matches the duration of the interval and unless the behavior begins and ends at the same time as the observation interval, this sampling strategy will yield inaccurate estimates of behavior frequency and duration (Quera, 1990; Suen & Ary, 1989).
Real-time sampling involves measuring real time at the onset and offset of each target behavior occurrence, causal factor occurrence, or both. A principal advantage of real-time recording is that it can simultaneously yield data about the frequency and duration of target behavior and causal factor occurrences. Like event and duration sampling, real-time sampling requires distinct onset and offset points.
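Because real-time records preserve both onset and offset times, frequency and duration estimates can be derived from the same data set. The sketch below (Python; the episode times and helper name are hypothetical) illustrates the calculation for a single observation session.

    def summarize_real_time(episodes):
        # episodes: list of (onset_seconds, offset_seconds) pairs for one session.
        frequency = len(episodes)
        total_duration = sum(offset - onset for onset, offset in episodes)
        return frequency, total_duration

    # Hypothetical real-time records (seconds from the start of a 300-s session):
    episodes = [(12, 20), (95, 160), (240, 252)]
    print(summarize_real_time(episodes))  # (3, 85): three occurrences totaling 85 s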
Momentary time sampling is a sophisticated strategy that
is most often used to gather data on several clients in a particular context such as a psychiatric unit or classroom. The procedure involves (a) conducting a brief observation (e.g., 20 s) of a client, (b) recording whether the target behavior or causal factor occurred during that brief moment of observation, and (c) repeating the first two steps for all clients being evaluated. In our classroom example, we might choose to observe a few normal students in order to gain a better understanding of the extent to which our client differs in terms of aggressive action. Thus, we would observe our client for a brief moment, then observe a comparison student for a brief interval, return to observing our client, and so on. In a sense, momentary time sampling is analogous to interval recording; the primary difference is that several persons are being observed simultaneously for very brief periods of time.
Setting Sampling
Environmental determinism and the attendant assumption of contextualism require that the behavior therapist carefully select assessment settings, and—whenever possible—the assessment should occur in multiple situations. One dimension that can be used to gauge assessment location is the degree to which the locations represent the client's natural environment. At one end of the continuum is the naturalistic setting. Naturalistic contexts are settings where variation in target behaviors and causal factors occurs as a function of naturally occurring and nonmanipulated contingencies. Assessment data collected in naturalistic settings are ecologically valid and more readily generalizable to criterion situations. One of the principal limitations of naturalistic assessment is that the inability to control target behavior or causal factor occurrences can preclude measurement of infrequent, clandestine, or subtle behaviors or stimuli.
At the other end of the continuum is the analog setting. In analog settings, the behavior therapist varies some aspect of one or more hypothesized causal factors while observational measures of the target behavior(s) are collected. A number of single-subject design strategies (e.g., ABAB, changing criterion, multiple baseline) can then be used to evaluate the direction and strength of the relationships between the causal factors and target behaviors. There are many different types of analog observation, including role playing, marital interaction assessments, behavioral approach tests, and functional analytic experiments.
In summary, sampling from natural settings allows for measurement of target behaviors and causal factors in criterion contexts. Thus, generalizability and ecological validity are enhanced. However, infrequent behaviors and an inability to control the occurrence of critical causal variables can introduce significant limitations with this sampling strategy. Assessment in analog settings allows for measurement of infrequent target behaviors because the assessor can introduce specific causal variables that may bring about the behaviors' occurrence. Because the analog setting is highly controlled, one cannot know how well the assessed behavior represents behavior in naturalistic contexts, which often contain multiple complex causal factors.
Assessment Methods
Our survey of behavior therapists indicated that the more commonly reported behavioral assessment methods were behavioral interviewing, rating scale and questionnaire administration, behavioral observation, and self-monitoring. Furthermore, experimental functional analysis, although it is not a frequently reported assessment method, appeared to be the most reliable indicator of the degree to which a clinician identified him- or herself as behaviorally oriented. In the following section, these assessment methods are briefly described. More extensive descriptions of these individual assessment methods can be found in several recently published texts on behavioral assessment and therapy (e.g., Bellack & Hersen, 1998; Haynes & O'Brien, 2000; Shapiro & Kratochwill, 2000; Spiegler & Guevremont, 1998) as well as specialty journals that publish articles on behavioral assessment and therapy methods (e.g., Behavior Therapy, Cognitive and Behavioral Practice).
Behavioral Assessment Interviewing
Behavioral interviewing differs from other forms of interviewing primarily in its structure and focus (e.g., Sarwer & Sayers, 1998). Structurally, behavioral interviewing tends to conform with the goals of behavioral assessment identified earlier in the chapter. Specifically, the assessor structures questions that prompt the client to provide information about the topography and function of target behaviors. Topographical questions direct the client to describe the mode and parameters of target behaviors, causal factor occurrences, or both. Functional questions direct the client to provide information about how target behaviors may be affected by possible causal factors.

Despite the fact that the interview is a very commonly used method, very little is known about its psychometric properties (Nezu & Nezu, 1989; Sarwer & Sayers, 1998). For example, Hay, Hay, Angle, and Nelson (1979) and Felton and Nelson (1984) presented behavior therapists with videotaped interviews of a confederate who was acting as a client. They subsequently measured the extent to which the therapists agreed on target behavior identification, causal factor identification, and treatment recommendations. Low to moderate levels of agreement were observed. These authors suggested that these results indicated that behavioral interviews do not appear to yield similar judgments about target behavior topography and function. However, these studies were limited because the therapists could only evaluate information that was provided in response to another interviewer's questions. Thus, they could not follow up with clarifying questions or direct the client to provide greater details about various aspects of the client's target behavior. This methodological limitation creates the strong possibility that the observed agreement rates would be substantially different if interviewers were allowed to use their own questioning strategies and techniques. Further research is needed to improve our understanding of the psychometric properties of behavioral interviews.
Behavioral Observation
Systematic observations can be conducted by nonparticipant observers and participant observers. Because observation relies on visual recording, this method is restricted to the measurement of observable actions. Nonparticipant observers are trained observation technicians who record target behaviors and causal factors using any of the aforementioned sampling methods. Professional observers, research assistants, and volunteers have been used to collect observational data in treatment outcome studies (Cone, 1999). Because nonparticipant observers are essentially hired and trained to conduct observations, they are often able to collect data on complex behaviors, causal factors, and target behavior and causal event sequences. Although nonparticipant observation is a versatile assessment method, it is infrequently used in nonresearch clinical applications due to cost.
Participant observers are persons who have alternative responsibilities and share a relationship with the client. In most cases, participant observers are family members, coworkers, friends, or caregivers. Because participant observers are typically persons who are already involved in the client's life, they are able to conduct observations in many settings. The major drawback associated with participant observation is limited focus and inaccuracy—that is, because participant observers have multiple responsibilities, only a small number of target behaviors and causal factors can be reliably and accurately observed (Cone, 1999).
Self-Monitoring
As the name implies, self-monitoring is an assessment method that relies on clients to systematically sample and record their own behavior. Because clients can access all three modes of responding (cognitive, affective, overt-motor) in multiple naturalistic contexts, self-monitoring has evolved into a popular and sophisticated assessment method (e.g., see the special section on self-monitoring in the December 1999 issue of Psychological Assessment). To maximize accuracy, target behaviors must be clearly defined so that clients can consistently record target behavior occurrence.
Self-monitoring has many advantages as an assessment method. As noted previously, clients can observe all modes of behavior with self-monitoring. Additionally, private behaviors are more readily measured with self-monitoring. Finally, self-monitoring has a reactive effect that often promotes reductions in undesirable target behavior occurrence and increases in desired target behavior occurrence (Korotitsch & Nelson-Gray, 1999).
The principal limitations of self-monitoring are bias and reactivity. Specifically, a client may not accurately record target behavior occurrence due to a number of factors, including expectations for positive or negative consequences, lack of awareness of target behavior occurrence, the cuing function of self-monitoring behavior, and application of criteria for target behavior occurrence that are different from the therapist's. Additionally, noncompliance—and the resultant missing data—can be problematic with self-monitoring procedures (Bornstein, Hamilton, & Bornstein, 1986; Craske & Tsao, 1999). This risk for noncompliance can be reduced, however, by involving the client in the development of the self-monitoring system and providing consistent reinforcement for compliance through regular review and discussion of collected data.
Questionnaires
Questionnaires have several strengths. They are inexpensive, easily administered, and easily interpreted. Furthermore, there are a vast number of questionnaires that can be used to evaluate a wide array of target behaviors (e.g., see Hersen & Bellack, 1988, for a compilation of behavioral assessment inventories). Finally, questionnaires can be used for a number of behavioral assessment goals, including operationalization, identification of functional relationships, and treatment design.

The most significant problem with questionnaires is that they are often worded in a context-free manner. For example, many questionnaire items ask a client to rate agreement (e.g., strongly agree, agree, disagree, strongly disagree) with a contextually nonbound statement about a target behavior (e.g., I often feel angry). Furthermore, many inventories sum distinct behaviors, thoughts, and affective states into a global score. This aggregation of behavioral information is, of course, contrary to the notion of operationalizing behavior into discrete and precise modes and dimensions. Taken together, the measurement limitations commonly found in questionnaires make it very difficult to abstract critical information about functional relationships. Therefore, many questionnaires are minimally helpful for intervention design. They can, however, be helpful in establishing the social significance of a target behavior and in tracking changes in target behaviors across time.
Summary of Assessment Methods
Different combinations of sampling and measurement strategies can be used to gather information about the topography and function of target behavior. Event, duration, and real-time sampling are most applicable to target behaviors that have distinct onset and offset points. Conversely, interval
sampling is more suitable for high-frequency behavior and behaviors with ambiguous onset and offset points. Assessment locations can range from naturalistic settings to controlled analog settings. Analog settings allow for enhanced precision in target behavior measurement and the measurement of infrequently occurring behaviors. Alternatively, naturalistic settings allow for enhanced generalizability and evaluation of behavior in settings that present multiple and complex stimuli.
Behavioral interviewing, self-monitoring, and questionnaire administration can be used to assess all modes of target behaviors. In contrast, systematic observation is restricted to the measurement of overt-motor behavior. In addition to differences in capacity for measuring target behavior mode, each assessment method has advantages and disadvantages in its convenience, cost, and validity (for more complete reviews of the psychometric issues related to the various assessment methods, see Cone, 1999; Haynes & O'Brien, 2000; Skinner, Dittmer, & Howell, 2000).
The strengths and limitations of behavior sampling strategies, setting sampling strategies, and assessment methods must be considered in the design and implementation of a behavioral assessment. Because unique errors are associated with each method, it is prudent to use a multimethod assessment strategy. Furthermore, it is beneficial to collect target behavior data in multiple contexts.
Methods Used to Identify Causal Functional Relationships
The aforementioned assessment methods allow the behavior therapist to collect basic information about the topography of target behaviors and contextual factors. Additional information about functional relationships can be abstracted from data yielded by these methods when logical or quantitative decision-making strategies are applied. The more common strategies used to identify potential causal relationships are reviewed in the following sections.
Marker Variable Strategy
A marker variable is a conveniently obtained measure that is reliably associated with the strength of a causal functional relationship. Empirically validated marker variables can be derived from self-report inventories specifically designed to identify functional relationships, structured interviews, psychophysiological assessments, and role-playing exercises. The Motivational Assessment Scale for self-injurious behavior (Durand & Crimmins, 1988) and the School Refusal Assessment Scale (Kearney & Silverman, 1990) are two examples of functional analytic questionnaires that have been shown to predict causal relationships in naturalistic settings. Similarly, Lauterbach (1990) developed a structured interviewing methodology that can assist with the identification of causal relationships between antecedent events and target behaviors. An example of an empirically validated psychophysiological marker variable is client response to the carbon dioxide inhalation challenge. In this case, it has been reliably shown that patients with panic disorder—relative to controls without the disorder—are significantly more likely to experience acute panic symptoms when they are asked to repeatedly inhale air with high concentrations of carbon dioxide (Barlow, 1988; Clark, Salkovskis, & Chalkley, 1985). Thus, the patient's responses to this test can be used as a marker for whether the complex biobehavioral relationships that characterize panic disorder are operational for a particular client. Finally, Kern (1991) developed a standardized-idiographic role-playing procedure in which setting-behavior relationships from recent social interactions are simulated and systematically evaluated for the purposes of identifying causal functional relationships.
Although the marker variable strategy can provide important information about the presence of causal functional relationships, only a few empirically validated marker variables have so far been identified in the behavioral literature. As a result, behavioral assessors have tended to rely on unvalidated marker variables, such as verbal reports obtained during behavioral interviews (e.g., a patient diagnosed with posttraumatic stress disorder may report that increased flashback frequency is caused by increased job stress), administration of traditional self-report inventories, and in-session observation of setting-behavior interactions (e.g., a patient with a social phobia shows increased sympathetic activation and topic avoidance when asked to describe feared situations), to identify causal functional relationships.
A major advantage of the marker variable strategy is ease of application. A behavior therapist can identify many potential causal functional relationships with a very limited investment of time and effort. For example, the number of markers of potential causal relationships that can be identified through a single behavioral interview can be extensive.
The most significant problem with using marker variables to infer the presence of causal functional relationships is related to generalizability. Specifically, the extent to which unvalidated marker variables such as patient reports, self-report inventory responses, laboratory evaluations, and in-session setting-behavior interactions correlate with actual causal relationships between contextual factors and target behavior is often unknown. Additionally, for those instances in which empirically validated marker variables are available, the magnitude of correlation between the marker variable and actual causal relationships can vary substantially for an individual client.
Behavioral Observation and Self-Monitoring of Context-Behavior Interactions
A second procedure commonly used by behavior therapists to obtain basic information on causal relationships is systematic observation of nonmanipulated context-behavior interactions. Most commonly, clients are instructed to self-monitor some dimension of a target behavior (e.g., frequency or magnitude) along with one or more contextual factors that are thought to be exerting a significant influence on the target behavior. Alternatively, direct observation of setting-behavior interactions can be conducted by trained observers or participant observers in naturalistic (e.g., the client's home, workplace) or analog (e.g., a therapist's office, laboratory) environments (Foster, Bell-Dolan, & Burge, 1988; Foster & Cone, 1986; Hartmann & Wood, 1990).
Self-monitoring and direct observation methods can yield data that support causal inferences (Gottman & Roy, 1990). However, these methods have two practical limitations. First, patients or observers must be adequately trained so that the target behaviors and controlling factors are accurately and reliably recorded. Second, as the number or complexity of the variables to be observed increases, accuracy and reliability often decrease (Foster et al., 1988; Hartmann & Wood, 1990; Paul, 1986a, 1986b). Taken together, these limitations suggest that systematic observation methods are best suited for situations in which the target behavior and contextual variables are easily quantified and few in number.
Experimental Manipulation
The third method that can be used to identify causal relationships is experimental manipulation. Experimental manipulations involve systematically modifying contextual factors and observing consequent changes in target behavior topography. These manipulations can be conducted in naturalistic settings (e.g., Sasso et al., 1992), analog settings (e.g., Cowdery, Iwata, & Pace, 1990; Durand & Crimmins, 1988), psychophysiological laboratory settings (e.g., Vrana, Constantine, & Westman, 1992), and during assessment or therapy sessions (Kohlenberg & Tsai, 1987).
Experimental manipulation has received renewed interest in recent years because it can be an effective strategy for identifying specific stimulus conditions that may reinforce problematic behavior (Haynes & O'Brien, 2000). It can also be time efficient and can conform to the pragmatic requirements of outpatient settings while yielding information that facilitates effective intervention design. For example, Iwata and colleagues (Iwata et al., 1994) and Durand and colleagues (Durand, 1990; Durand & Crimmins, 1988) developed a standardized protocol for conducting experimental manipulations to identify the function of self-injurious behavior. In their protocols, clients with self-injurious behavior are evaluated under multiple controlled analog observation conditions so that the function of the behavior can be identified. One condition involves providing the client with social attention contingent upon the occurrence of self-injurious behavior (the client is ignored until the self-injurious behavior occurs, at which point she receives social attention). A second condition involves providing tangible rewards (e.g., an edible reinforcer, a magazine) contingent upon the occurrence of self-injurious behavior. A third condition involves providing opportunities for negative reinforcement of self-injurious behavior (the client is exposed to an unpleasant task that would be terminated when the self-injurious behavior occurs). Finally, in the fourth condition, the client's level of self-injurious behavior is observed while he or she is socially isolated. It is presumed that rates of self-injurious behavior in this final context occur as a function of intrinsically reinforcing mechanisms such as opioid release, tension reduction, nociceptive feedback, or any combination of these.

Iwata et al. (1994) summarized data from 152 functional analyses using the aforementioned protocol. Based on visual data inspection procedures, they judged which of the four types of maintaining contexts were most closely associated with increased rates of self-injurious behavior. This information was then used to guide treatment design. Thus, if social attention or tangible reinforcement contexts were associated with higher rates of target behavior, the intervention would be designed so that attention and access to preferred materials were consistently provided when self-injurious behavior was not emitted by the client. Alternatively, if the client exhibited higher rates of self-injurious behavior during the negative reinforcement condition, the intervention would include procedures that provided negative reinforcement contingent upon nonperformance of self-injurious behavior (e.g., providing a break when a client was engaged in an unpleasant task, given that self-injurious behavior did not occur). Finally, if the client exhibited higher rates of self-injurious behavior during intrinsic reinforcement conditions, the intervention would provide alternative sources of self-stimulation, differential reinforcement of other behavior (sensory stimulation delivered contingent upon performance of non-self-injurious behaviors), or response interruption procedures.
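The decision logic that links such condition-wise response rates to treatment selection can be summarized compactly. The sketch below (Python, with hypothetical rates) illustrates only that logic; it is not Iwata et al.'s (1994) procedure, which relied on visual inspection of session-by-session data.

    # Hypothetical mean rates of self-injurious behavior (responses per minute)
    # observed under each analog assessment condition.
    condition_rates = {
        "social attention": 2.4,
        "tangible reinforcement": 0.3,
        "negative reinforcement (escape)": 0.5,
        "alone (intrinsic reinforcement)": 0.4,
    }

    # The condition with the highest rate is taken as the hypothesized maintaining
    # function and is used to guide selection of intervention components.
    maintaining = max(condition_rates, key=condition_rates.get)
    print("Hypothesized maintaining function:", maintaining)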
Results from Iwata et al.'s (1994) study indicated that 80% of the treatments based on the results of functional analyses were successful (operationally defined as achieving self-injurious behavior rates that were at or below 10% of those observed during baseline). Alternatively, interventions not based on the functional analyses were described as having less adequate outcomes. Other researchers have supported these general findings (Carr, Robinson, & Palumbo, 1990; Derby et al., 1992).
Despite the potential treatment utility of experimental manipulations, several questions remain unanswered. First, the psychometric properties (e.g., reliability, validity) of analog observation are largely unexplored and—as a result—largely unknown. Second, the incremental effect that analog observation has on treatment outcomes has not yet been adequately estimated. Finally, most demonstrations of the treatment utility of analog observation have been limited to a very restricted population of clients who were presenting with a restricted number of behavior problems. Thus, the apparent treatment utility of this procedure for identifying the function of behavior may not adequately generalize to other patient populations, problem behaviors, and settings.
In summary, marker variables, behavioral observation of naturally occurring context-behavior interactions, and experimental manipulations can be used to identify potential causal functional relationships. The strength of causal inference associated with each method tends to vary inversely with clinical applicability. Experimental manipulations and behavioral observation of naturally occurring setting-behavior interactions yield data that support strong causal inferences. However, each method either requires a significant investment of time and effort or permits evaluation of only a few target behaviors and controlling factors. In contrast, the marker variable strategy typically supports only weak causal inferences, yet it is easily applied and can provide information on a broad range of potential causal relationships.
Methods Used to Estimate the Magnitude of Causal Functional Relationships
After a subset of hypothesized causal functional relationships have been identified using marker variables, observation, experimentation, or any combination of these techniques, the behavior therapist needs to estimate the magnitude of the relationships. There are two primary methods available for accomplishing this task.
Intuitive Evaluation of Assessment Data
In an effort to determine the clinical activities of behavior therapists, part of a survey of AABT members (described earlier) requested that information be provided about how assessment data were typically evaluated. Results indicated that the respondents used subjective evaluation and visual examination of graphs to evaluate assessment data significantly more often than they used any statistical technique such as computing measures of central tendency, variance, or association.
Some have argued that intuitive data evaluation is an appropriate—if not preferred—method for evaluating behavioral assessment data. The primary strengths associated with this method are that (a) it requires only a modest investment of time and effort on the part of the behavioral clinician, (b) an intuitive approach is heuristic—it can promote hypothesis generation, and (c) intuitive approaches are well suited for evaluating complex patterns of data. An additional argument supporting intuitive evaluation is associated with clinical significance. Specifically, it has been argued that visual inspection is conservatively biased and, as a result, determinations of significant effects will occur only when the causal relationship is of moderate to high magnitude.
Matyas and Greenwood (1990) have challenged these supportive arguments by demonstrating that intuitive evaluation of data can sometimes lead to higher rates of Type I error when data are autocorrelated (i.e., correlation of the data with itself, lagged by a certain number of observations) and when there are trends in single-subject data. A similar finding was reported by O'Brien (1995). In his study, graduate students who had completed course work in behavioral therapy were provided with a contrived set of self-monitoring data presented on three target behaviors: headache frequency, intensity, and duration. The data set also contained information from three potentially relevant causal factors: hours of sleep, marital argument frequency, and stress levels. The data were constructed so that only a single causal factor was strongly correlated (i.e., r > .60) with a single target behavior (the remaining correlations between causal variables and target behaviors were of very low magnitude).
Students were instructed to (a) evaluate data as they typically would in a clinical setting, (b) estimate the magnitude of correlation between each causal factor and target behavior, and (c) select the most highly associated causal factor for each target behavior. Results indicated that the students predominantly used intuitive evaluation procedures to estimate correlations. Additionally, the students substantially underestimated the magnitude of the strong correlations and overestimated the magnitude of weak correlations. In essence, they demonstrated a central tendency bias, guessing that two variables were moderately correlated. Finally—and most important—the students were able to correctly identify the most important causal variable for each target behavior only about 50% of the time.
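For self-monitoring data of this kind, the correlations that the judges were asked to estimate intuitively can instead be computed directly. A minimal sketch follows (Python 3.10+ for statistics.correlation; the daily values are hypothetical).

    from statistics import correlation

    # Hypothetical daily self-monitoring values across 10 days.
    headache_frequency = [2, 4, 1, 5, 3, 6, 2, 5, 4, 6]
    hours_of_sleep     = [8, 6, 9, 5, 7, 4, 8, 5, 6, 5]
    stress_level       = [3, 4, 2, 4, 3, 5, 2, 3, 4, 4]

    # Pearson r between the target behavior and each hypothesized causal factor.
    print(round(correlation(headache_frequency, hours_of_sleep), 2))
    print(round(correlation(headache_frequency, stress_level), 2))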
In our AABT survey, we further evaluated the potential limitations of intuitive data evaluation methods. Similar to the O'Brien (1995) study, we created a data set that contained three target behaviors and three potential causal variables in a three-by-three table. The correlation between each pair of target behaviors and causal factors was either low (r = .1), moderate (r = .5), or high (r = .9). Participants were then instructed to identify which of the three possible causal variables was most strongly associated with the target behavior. Results indicated that when the true correlation between the target behavior and causal factor was either low or moderate, the participants were able to correctly identify the causal variable at levels that were slightly better than chance (i.e., 55% and 54% correct identification, respectively). This finding replicated those reported by O'Brien (1995) when graduate students comprised the study sample. When the true correlation was high, the participants' performance rose to 72%. It is interesting to note that this improved performance was not consistent across tasks—that is, a correct identification of the causal variable in one pair of variables did not appear to be substantially associated with the likelihood of generating a correct answer on a different pair of variables.
Taken together, these results suggest that intuitive evaluation of behavioral assessment data is susceptible to misestimation of covariation, which (as noted earlier) is a foundation for causal inference. As Arkes (1981) has argued, when conducting an intuitive analysis of data similar to those described previously, many clinicians tend to overestimate the magnitude of functional relationships or infer an illusory correlation (Chapman & Chapman, 1969). One reason for this phenomenon is that confirmatory information or hits (i.e., instances in which the causal variable and hypothesized effect co-occur) are overemphasized in intuitive decision making relative to disconfirming information such as false positives and misses.
A number of other biases and limitations in human judgment as it relates to causal inference have been identified (cf. Einhorn, 1988; Elstein, 1988; Garb, 1998; Kanfer & Schefft, 1988; Kleinmuntz, 1990; also see the chapter by Weiner in this volume). A particularly troubling finding, however, is that a clinician's confidence in his or her judgments of covariation and causality increases with experience, but accuracy remains relatively unchanged (Arkes, 1981; Garb, 1989, 1998).
In summary, intuitive data evaluation approaches can be convenient and useful for hypothesis generation. Fundamental problems emerge, however, when behavior therapists intuitively estimate the magnitude of covariation between hypothesized contextual variables and target behaviors. This problem is compounded by the fact that multiple behaviors, multiple causes, and multiple interactions are encountered in a typical behavioral assessment. It is thus recommended that statistical tests be conducted whenever possible to evaluate the strength of hypothesized causal functional relationships.
Quantitative Evaluation of Assessment Data
One of the most clinically friendly methods for evaluating assessment data is the conditional probability analysis—a statistical method designed to evaluate the extent to which target behavior occurrence (or nonoccurrence) is conditional upon the occurrence (or nonoccurrence) of some other variable. Specifically, the behavior therapist evaluates differences in the overall probability that the target behavior will occur (i.e., base rate or unconditional probability) relative to the probability that the target behavior will occur, given that some causal factor has occurred (i.e., the conditional probability). If there is significant variation among unconditional and conditional probabilities, the behavior therapist concludes that the target behavior and causal factor are functionally related.
A broadly applicable and straightforward strategy for conducting a conditional probability analysis involves constructing a two-by-two table with target behavior occurrence (and nonoccurrence) denoting the columns and the causal factor presence (and absence) denoting the rows (see Table 22.4). To illustrate, we can return to our imagined client. The columns can be constructed so that they denote whether the client rated a particular day as consisting of high or low levels of checking. Let A = a clinically significant elevation in the frequency of checking for cancer tumors by touching your chest, B = level of perceived stress at work, and P = probability. A functional relationship tentatively would be inferred if the probability of experiencing heightened checking on a stressful day, P(A/B), is greater than the base rate probability of checking, P(A).
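With these definitions, comparing P(A) with P(A/B) reduces to simple counting over the self-monitored days, which is what the two-by-two table in Table 22.4 organizes. The sketch below (Python; the daily records are hypothetical) illustrates the computation.

    # Hypothetical daily records: (high work stress that day?, heightened checking that day?)
    days = [
        (True, True), (True, True), (True, False), (False, False),
        (False, True), (False, False), (True, True), (False, False),
    ]

    checking_days = sum(1 for stress, checking in days if checking)
    stress_days = sum(1 for stress, checking in days if stress)
    both = sum(1 for stress, checking in days if stress and checking)

    p_a = checking_days / len(days)   # unconditional (base rate) probability, P(A)
    p_a_given_b = both / stress_days  # conditional probability, P(A/B)
    print(f"P(A) = {p_a:.2f}, P(A/B) = {p_a_given_b:.2f}")
    # A functional relationship is tentatively inferred when P(A/B) exceeds P(A).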
Conditional probability analyses have important strengths and limitations. First, only a modest number of data points can yield reliable estimates of association (Schlundt, 1985).
TABLE 22.4  A Two-by-Two Contingency Table for Context-Behavior Evaluation

                             Target Behavior
  Causal Variable        Present      Absent      Row Total
  Present                A            B           A + B
  Absent                 C            D           C + D

Note: Conditional probabilities—probability of target occurrence given causal variable presence: A/(A + B); probability of target occurrence given causal variable absence: C/(C + D); probability of target nonoccurrence given causal variable presence: B/(A + B); and probability of target nonoccurrence given causal variable absence: D/(C + D).
Second, the statistical concepts underlying the methodology are easily understood. Third, many statistical packages can be used to conduct conditional probability analyses, or if none are available, the computations can be easily done by hand (e.g., Bush & Ciocco, 1992). Fourth—and most important—the procedure is easily incorporated into a clinical setting, and clients can participate in the data evaluation process. Specifically, we have found that a two-by-two table that presents information about target behavior occurrence, given the presence or absence of causal variable occurrence, can be readily constructed and interpreted in a clinical session. A limitation, however, is that conditional probability analyses can evaluate the interactions among only a small number of variables. Furthermore, because it is a nonparametric technique, it can be used only when the controlling variables and target behaviors are measured using nominal or ordinal scales.
Analysis of variance (ANOVA), t tests, and regression are conventional statistical techniques that can be used to evaluate causal functional relationships when data are collected on two or more variables. For example, in a multiple-baseline design (e.g., AB, ABAB), the clinician can conduct t tests, ANOVA, or regression to determine whether the level of target behavior occurrence differs as a function of contexts in which a causal factor is present (B) relative to contexts in which it is absent (A). The primary advantage of using t tests, ANOVA, and regression is that these procedures are well known to most behavior therapists who have received graduate training. The main disadvantage is that estimates of t and F are spuriously inflated when observational data are serially dependent (Kazdin, 1998; Suen & Ary, 1989). This inflation of t and F is not trivial. For example, Cook and Campbell (1979) noted that an autocorrelation of .7 can inflate a t value by as much as 265%. Thus, prior to using t tests, ANOVA, or regression, the clinician must determine whether data are substantially autocorrelated, and if they are, procedures must be used to reduce the level of autocorrelation (e.g., randomly select data from the series, partition out the variance attributable to autocorrelation).
Time series analyses involve taking repeated measures of the target behavior and one or more contextual factors across time. An estimate of the relationships among these variables is then calculated after the variance attributable to serial dependency is partitioned out (Gaynor, Baird, & Nelson-Gray, 1999; Matyas & Greenwood, 1996; Wei, 1990). When assessment data are measured with nominal or ordinal scales, lag sequential analysis can be used to evaluate functional relationships (Gottman & Roy, 1990). Alternatively, with interval and ratio data, other time series methodologies such as autoregressive integrated moving average (ARIMA) modeling and spectral analysis can be used (Cook & Campbell, 1979; McCleary & Hay, 1980; Wei, 1990).
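For nominal data of the kind suited to lag sequential analysis, the core comparison is between the base-rate probability of the target behavior and its probability immediately after a specified antecedent. The sketch below (Python; the coded sequence is hypothetical, and the significance tests described by Gottman and Roy, 1990, are omitted) illustrates that comparison.

    def lag1_transition_probability(sequence, antecedent, target):
        # P(target at time t + 1 | antecedent at time t) from a coded event sequence.
        followed = sum(
            1 for prev, nxt in zip(sequence, sequence[1:])
            if prev == antecedent and nxt == target
        )
        antecedent_count = sum(1 for code in sequence[:-1] if code == antecedent)
        return followed / antecedent_count if antecedent_count else 0.0

    # Hypothetical observation codes: D = demand placed, A = aggression, O = other.
    codes = list("ODAODAOOODAOOODO")
    print(round(codes.count("A") / len(codes), 2))                 # base rate of aggression
    print(round(lag1_transition_probability(codes, "D", "A"), 2))  # P(A | D at previous step)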
Time series methods can provide very accurate information about the magnitude and reliability of causal functional relationships. They can also be used to examine the effects of controlling variables on target behaviors across different time lags. However, their applicability is limited because (a) a large number of data points is necessary for a proper analysis, and (b) most behavior therapists will be able to analyze relationships among only a small number of variables. The first limitation can be reduced when the behavior therapist designs an assessment that yields a sufficient number of data points. The impact of the second limitation can be diminished if the behavior therapist carefully selects the most relevant target behaviors and causal factors using rational presuppositions and theory.
SUMMARY AND CONCLUSIONS
Behavioral assessment is a paradigm that is founded on a number of assumptions related to the nature of problem behavior and the ways that it should be measured. The overarching assumptions of empiricism and environmental determinism have been augmented by additional assumptions that arose from new developments in theory and research in the behavioral sciences. This broadening of assumptions occurred along with advancements in our understanding about the causes and correlates of target behavior. As a result, behavioral conceptualizations of target behavior have become increasingly complex, and contemporary behavior therapists must be able to identify and evaluate many potential functional relationships among target behaviors and contextual factors. Part of the ability to accomplish this task relies on a sound knowledge of (a) the different dimensions of topography that can be quantified, (b) the multiple ways that contextual variables and target behaviors can interact for a particular behavior disorder, and (c) one's own presuppositions and decisional strategies used to narrow causal fields.
In addition to conceptual foundations, familiarity with specific sampling and assessment methods and strategies for identifying functional relationships (e.g., the marker variable strategy, observation and self-monitoring of naturally occurring setting-behavior interactions, and experimental manipulation) is required to empirically identify causal functional relationships. Each method has strengths and limitations related to the strength of causal inference that can be derived from the collected data and the degree of clinical applicability. After basic assessment data on hypothesized causal functional relationships have been collected, intuitive and statistical procedures can be used to evaluate the magnitude of association. Intuitive approaches are well suited for hypothesis formation. As a method for estimating the magnitude of
covariation among variables, however, intuition is often inaccurate. Statistical approaches can provide unbiased information on the strength of functional relationships. Conditional probability analyses can be especially useful because they require only a modest amount of data, are easily understood, and are convenient to use. The principal limitation of statistical approaches is that they are limited to the evaluation of only a few variables; also, they appear to be incompatible with typical clinical settings, given their low reported use among behavior therapists.
All of the aforementioned assessment principles have been well developed in the behavioral assessment literature. However, our survey of behavior therapists suggests that many do not conduct assessments that are consistent with all of these principles. For example, most therapists appear to abide by behavioral assessment principles as these principles apply to the operationalization and quantification of target behaviors and contexts. However, few behavior therapists use quantitative decision aids to identify and evaluate the magnitude of context-behavior associations. Instead, they appear to rely predominantly on intuitive judgments of covariation and causation. Factors that may account for this mixed allegiance to behavioral assessment principles should be more thoroughly explored. Furthermore, in the coming years, research must examine training procedures that can help clinicians learn and use quantitative decision-making procedures.
A final important question for future consideration is the treatment utility of behavioral assessment in light of the growing use of empirically supported protocols. Specifically, to what extent will individualized treatments that are based on an idiographic behavioral assessment outperform standardized treatment protocols that require less intensive pretreatment assessments such as diagnostic interviews? Failure to demonstrate significantly improved outcomes might create a diminished need for individualized behavioral assessment procedures. Alternatively, there may be a heightened need for behavioral assessment procedures that can help match interventions with client behavior problems and characteristics. In either case, there is a clear need to evaluate the treatment utility of behavioral assessment in relation to both idiographic treatment design and standardized treatment-client matching.
REFERENCES
American Psychiatric Association (1994) Diagnostic and
statisti-cal manual of mental disorders (4th ed.) Washington, DC:
American Psychiatric Association.
Arkes, H R (1981) Impediments to accurate clinical judgment and
possible ways to minimize their impact Journal of Consulting and Clinical Psychology, 49, 323–330.
Axelrod, S (1987) Functional and structural analyses of behavior: Approaches leading to the reduced use of punishment proce-
dures? Research in Developmental Disabilities, 8, 165–178.
Bandura, A (1981) In search of pure unidirectional determinants.
York: Pergamon Press.
Bellack, A S., & Hersen, M (1998) Behavioral assessment: A practical handbook (4th ed.) Needham Heights, MA: Allyn and
Bacon.
Bernstein, D A., Borkovec, T D., & Coles, M G H (1986) Assessment of anxiety In A Ciminero, K S Calhoun, & H E.
Adams (Eds.), Handbook of behavioral assessment (2nd ed.,
pp 353–403) New York: Wiley.
Bornstein, P H., Hamilton, S B., & Bornstein, M T (1986) Self-monitoring procedures In A R Ciminero, C S Calhoun, & H E Adams (Eds.), Handbook of behavioral assessment (2nd ed., pp 176–222) New York: Wiley.
Burish, T G., Carey, M P., Krozely, M G., & Greco, M G (1987) Conditioned side effects induced by cancer chemotherapy:
Prevention through behavioral treatment Journal of Consulting and Clinical Psychology, 55, 42–48.
Bush, J P., & Ciocco, J E (1992) Behavioral coding and sequential analysis: The portable computer system for observational use.
Behavioral Assessment, 14, 191–197.
Carey, M P., & Burish, T G (1988) Etiology and treatment of the psychological side effects associated with cancer chemotherapy.
Psychological Bulletin, 104, 307–325.
Carr, E G., & Durand, V M (1985) Reducing behavior problems
through functional communication training Journal of Applied Behavior Analysis, 18, 111–126.
Carr, E G., Robinson, S., & Palumbo, L R (1990) The wrong issue: Aversive versus nonaversive treatment; The right issue: Functional versus nonfunctional treatment In A C Repp &
S Nirbhay (Eds.), Perspectives on the use of nonaversive and aversive interventions for persons with developmental disabili- ties (pp 361–379) Sycamore, IL: Sycamore.
Chapman, L J., & Chapman, J P (1969) Illusory correlation as an
obstacle to the use of valid psychodiagnostic signs Journal of Abnormal Psychology, 74, 271–280.
Clark, D M., Salkovskis, P M., & Chalkley, A J (1985)
Respira-tory control as a treatment for panic attacks Journal of Behavior Therapy and Experimental Psychiatry, 16, 23–30.
Cone, J D (1988) Psychometric considerations and multiple models of behavioral assessment In A S Bellack & M Hersen (Eds.), Behavioral assessment: A practical handbook (3rd ed., pp 42–66) Elmsford, NY: Pergamon Press.
Cone, J D (1999) Observational assessment: Measure
develop-ment and research issues In P C Kendall, J N Butcher, & G N.
Holmbeck (Eds.), Handbook of research methods in clinical
psychology (2nd ed., pp 183–223) New York: Wiley.
Cook, T D., & Campbell, D T (1979) Quasi-experimentation:
Design and analysis issues for field settings Chicago: Rand
McNally.
Correa, E I., & Sutker, P B (1986) Assessment of alcohol and drug
behaviors In A R Ciminero, K S Calhoun, & H E Adams
(Eds.), Handbook of behavioral assessment (2nd ed., pp 446–
495) New York: Wiley.
Cowdery, G E., Iwata, B A., & Pace, G M (1990) Effects and side
effects of DRO as treatment for self-injurious behavior Journal
of Applied Psychology, 23, 497–506.
Craighead, W E., Kazdin, A E., & Mahoney, M J (1981)
Behav-ior modification: Principles, issues, and applications (2nd ed.).
Boston: Houghton Mifflin.
Craske, M G., Barlow, D H., & O’Leary, T A (1992) Mastery of
your anxiety and worry San Antonio, TX: The Psychological
Corporation.
Craske, M G., & Tsao, J C I (1999) Self-monitoring with panic
and anxiety disorders Psychological Assessment, 11, 466–479.
Derby, K M., Wacker, D P., Sasso, G., Steege, M., Northrup, J.,
Cigrand, K., & Asmus, J (1992) Brief functional analysis
tech-niques to evaluate aberrant behavior in an outpatient setting:
A summary of 79 cases Journal of Applied Behavior Analysis,
25, 713–721.
Dougher, M J (2000) Clinical behavior analysis Reno, NV:
Context Press.
Durand, V M (1990) Severe behavior problems: A functional
com-munication training approach New York: Guilford Press.
Durand, M V., & Carr, E G (1991) Functional communication
training to reduce challenging behavior: Maintenance and
appli-cation in new settings Journal of Applied Behavior Analysis, 24,
251–264.
Durand, V M., & Crimmins, D (1988) Identifying the variables
maintaining self-injurious behavior Journal of Autism and
Developmental Disorders, 18, 99–117.
Einhorn, H J (1988) Diagnosis and causality in clinical and
statis-tical prediction In D C Turk & P Salovey (Eds.), Reasoning,
inference, and judgment in clinical psychology (pp 51–70) New
York: Free Press.
Elliot, A J., Miltenberger, R G., Kaster-Bundgaard, J., & Lumley,
V (1996) A national survey of assessment and therapy
tech-niques used by behavior therapists Cognitive and Behavior
Practice, 3, 107–125.
Elstein, A S (1988) Cognitive processes in clinical inference and
decision making In D C Turk & P Salovey (Eds.), Reasoning,
inference, and judgment in clinical psychology (pp 17–50) New
York: Free Press.
Evans, I M (1985) Building systems models as a strategy for target behavior selection in clinical assessment Behavioral Assessment, 7, 21–32.
Felton, J L., & Nelson, R O (1984) Inter-assessor agreement
on hypothesized controlling variables and treatment proposals.
Foster, S L., Bell-Dolan, D J., & Burge, D A (1988) Behavioral
observation In A S Bellack & M Hersen (Eds.), Behavioral assessment: A practical handbook (3rd ed., pp 119–160).
Elmsford, NY: Pergamon Press.
Foster, S L., & Cone, J D (1986) Design and use of direct observation systems In A R Ciminero, C S Calhoun, & H E Adams (Eds.), Handbook of behavioral assessment (2nd ed., pp 253–324) New York: Wiley.
Garb, H N (1989) Clinical judgment, clinical training, and professional experience Psychological Bulletin, 105, 387–396.
Garb, H N (1998) Studying the clinician: Judgment research and psychological assessment Washington, DC: American Psychological Association.
Gaynor, S T., Baird, S C., & Nelson-Gray, R O (1999) Application of time-series (single subject) designs in clinical psychology In P C Kendall, J N Butcher, & G N Holmbeck (Eds.), Handbook of research methods in clinical psychology (2nd ed., pp 297–329) New York: Guilford Press.
Goldfried, M R., & Kent, R N (1972) Traditional versus ioral assessment: A comparison of methodological and theoreti-
behav-cal assumptions Psychologibehav-cal Bulletin, 77, 409–420.
Gottman, J M., & Roy, A K (1990) Sequential analysis: A guide for behavioral researchers New York: Cambridge University
Press.
Grant, L., & Evans, A (1994) Principles of behavior analysis New
York: HarperCollins.
Hartmann, D P., & Wood, D D (1990) Observational methods In
A S Bellack, M Hersen, & A E Kazdin (Eds.), International handbook of behavior modification and therapy (2nd ed., pp 107–
138) New York: Plenum Press.
Hawkins, R P (1986) Selection of target behaviors In R O.
Nelson & S C Hayes (Eds.), Conceptual foundations of behavioral assessment (pp 331–383) New York: Guilford Press.
Hay, W M., Hay, L R., Angle, H V., & Nelson, R O (1979) The reliability of problem identification in the behavioral interview.
Behavioral Assessment, 1, 107–118.
Hayes, S C., & Follette, W C (1992) Can functional analysis
provide a substitute for syndromal classification? Behavioral Assessment, 14, 345–365.
Hayes, S C., & Toarmino, D (1999) The rise of clinical behavior
analysis Psychologist, 12, 505–509.
Hayes, S L., Nelson, R O., & Jarret, R B (1987) The treatment
utility of assessment: A functional analytic approach to evaluate
assessment quality American Psychologist, 42, 963–974.
Haynes, S N (1992) Models of causality in psychopathology:
Toward dynamic, synthetic, and nonlinear models of behavior
disorders New York: MacMillan.
Haynes, S N (2000) Behavioral assessment of adults In G.
Goldstein & M Hersen (Eds.), Handbook of psychological
as-sessment (3rd ed., pp 453–502) New York: Pergamon/Elsevier.
Haynes, S N., & O’Brien, W H (1990) Functional analysis in
behavior therapy Clinical Psychology Review, 10, 649–668.
Haynes, S N., & O’Brien, W H (2000) Principles and practice of
be-havioral assessment New York: Kluwer Academic/Plenum.
Hersen, M., & Bellack, A S (1988) Dictionary of behavioral
assessment techniques New York: Pergamon Press.
Hollandsworth, J G (1986) Physiology and behavior therapy:
Con-ceptual guidelines for the clinician New York: Plenum Press.
Iwata, B A., Kahng, S W., Wallace, M D., & Lindberg, J S.
(2000) The functional analysis model of behavioral assessment.
In J Austin & J Carr (Eds.), Handbook of applied behavior
analysis (pp 61–89) Reno, NV: Context Press.
Iwata, B A., Pace, G M., Dorsey, M F., Zarcone, J R., Vollmer, B.,
& Smith, J (1994) The function of self-injurious behavior:
An experimental-epidemiological analysis Journal of Applied
Behavior Analysis, 27, 215–240.
James, L R., Mulaik, S A., & Brett, J M (1982) Causal analysis:
Assumptions, models, and data Beverly Hills: Sage.
Kanfer, F H., & Phillips, J (1970) Learning foundations of
behav-ior therapy New York: Wiley.
Kanfer, F H., & Schefft, B K (1988) Guiding the process of
ther-apeutic change Champaign: Research Press.
Kazdin, A E (1998) Research design in clinical psychology
(3rd ed.) Boston: Allyn and Bacon.
Kearney, C A., & Silverman, W K (1990) A preliminary analysis
of a functional model of assessment and treatment for school
refusal behavior Behavior Modification, 14, 340–366.
Kern, J M (1991) An evaluation of a novel role-playing
methodol-ogy: The standardized idiographic approach Behavior Therapy,
22, 13–29.
Kleinmuntz, B (1990) Why we still use our heads instead of
for-mulas: Toward an integrative approach Psychological Bulletin,
107, 296–310.
Kohlenberg, R J., & Tsai, M (1987) Functional analytic
psy-chotherapy In N Jacobson (Ed.), Psychotherapists in clinical
practice: Cognitive and behavioral perspectives (pp 388–443).
New York: Guilford Press.
Kohlenberg, R J., & Tsai, M (1991) Functional analytic
psy-chotherapy: Creating intense and curative therapeutic
relation-ships New York: Plenum Press.
Korotitsch, W J., & Nelson-Gray, R O (1999) An overview of
self-monitoring research in assessment and treatment
Psycho-logical Assessment, 11, 415–425.
Krasner, L (1992) The concepts of syndrome and functional
analysis: Compatible or incompatible Behavioral Assessment,
14, 307–321.
Lauterbach, W (1990) Situation-response questions for identifying the function of problem behavior: The example of thumb suck-
ing British Journal of Clinical Psychology, 29, 51–57.
Levey, A B., Aldaz, J A., Watts, F N., & Coyle, K (1991)
Articu-latory suppression and the treatment of insomnia Behavior Research and Therapy, 29, 85–89.
Matyas, T A., & Greenwood, K M (1990) Visual analysis of single-case time series: Effects of variability, serial dependence,
and magnitude of intervention effect Journal of Applied ior Analysis, 23, 341–351.
Behav-Matyas, T A., & Greenwood, K M (1996) Serial dependency in single-case time series In R D Franklin, D B Allison, & B S.
Gorman (Eds.), Design and analysis of single case research
stress disorder in a former prisoner of war Behavior tion, 15, 25–260.
Modifica-Morris, E K (1988) Contextualism: The world view of behavior
analysis Journal of Experimental Child Psychology, 46, 289–323 Nathan, P E., & Gorman, J M (Eds.) (1998) A guide to treatments that work New York: Oxford University Press.
Nelson, R O (1988) Relationships between assessment and
treat-ment within a behavioral perspective Journal of ogy and Behavioral Assessment, 10, 155–169.
Psychopathol-Nelson, R O., & Hayes, S C (1986) The nature of behavioral
assessment In R O Nelson & S C Hayes (Eds.), Conceptual foundations of behavioral assessment (pp 3–41) New York:
Guilford Press.
Nezu, A M., & Nezu, C M (1989) Clinical decision making in behavior therapy: A problem solving perspective Champaign,
IL: Research Press.
O’Brien, W H (1995) Inaccuracies in the estimation of functional
relationships using self-monitoring data Journal of Behavior Therapy and Experimental Psychiatry, 26, 351–357.
O’Brien, W H., & Haynes, S N (1995) A functional analytic approach to the assessment and treatment of a child with fre-
quent migraine headaches Session: Psychotherapy in Practice,
1, 65–80.
O’Brien, W H., & Haynes, S N (1997) Functional analysis In
G Buela-Casal (Ed.), Handbook of psychological assessment
(pp 493–521) Madrid, Spain: Sigma.
Trang 24References 529
O’Donohue, W (1998) Learning and behavior therapy Needham
Heights, MA: Allyn and Bacon.
O’Neill, R E., Horner, R H., Albin, R W., Storey, K., & Sprague,
J R (1990) Functional analysis of problem behavior: A
practi-cal assessment guide Sycamore, IL: Sycamore.
Paul, G L (1986a) Assessment in residential settings: Principles and
methods to support cost-effective quality operations Champaign,
IL: Research Press.
Paul, G L (1986b) The time sample behavioral checklist:
Obser-vational assessment instrumentation for service and research.
Champaign, IL: Research Press.
Peterson, L., Homer, A L., & Wonderlich, S A (1982) The
integrity of independent variables in behavior analysis Journal
of Applied Behavior Analysis, 15, 477–492.
Prochaska, J O (1994) Strong and weak principles for progressing
from precontemplation to action on the basis of twelve problem
behaviors Health Psychology, 13, 47–51.
Quera, V (1990) A generalized technique to estimate frequency and
duration in time sampling Behavioral Assessment, 12, 409–424.
Russo, D C., & Budd, K S (1987) Limitations of operant practice
in the study of disease Behavior Modification, 11, 264–285.
Sasso, G M., Reimers, T M., Cooper, L J., Wacker, D., Berg, W.,
Steege, M., Kelly, L., & Allaire, A (1992) Use of descriptive
and experimental analysis to identify the functional properties of
aberrant behavior in school settings Journal of Applied
Behav-ior Analysis, 25, 809–821.
Sarwer, D B., & Sayers, S L (1998) Behavioral interviewing.
In A S Bellack & M Hersen (Eds.), Behavioral assessment:
A practical guidebook (4th ed.) Needham Heights, MA: Allyn
and Bacon.
Schlundt, D G (1985) An observational methodology for
func-tional analysis Bulletin for the Society of Psychologists in
Addictive Behaviors, 4, 234–249.
Shapiro, E S., & Kratochwill, T R (1988) Behavioral assessment
in schools: Theory, research, and clinical foundations (2nd ed.).
New York: Guilford Press.
Shapiro, E S., & Kratochwill, T R (2000) Behavioral assessment
in schools: Theory, research, and clinical foundations (2nd ed.).
New York: Guilford.
Skinner, C H., Dittmer, K I., & Howell, L A (2000) Direct vation in school settings: Theoretical issues In E S Shapiro &
obser-T R Kratochwill (Eds.), Behavioral assessment in schools: Theory, research, and clinical foundations (2nd ed., pp 19–45).
New York: Guilford Press.
Smith, R G., Iwata, B G., Vollmer, T R., & Pace, G M (1992) On the relationship between self-injurious behavior and self-restraint.
Journal of Applied Behavior Analysis, 25, 433– 445.
Spiegler, M D., & Guevremont, D C (1998) Contemporary behavior therapy (3rd ed.) Pacific Grove: Brookes/Cole Sturmey, P (1996) Functional analysis in clinical psychology New
York: Wiley.
Suen, H K., & Ary, D (1989) Analyzing quantitative behavioral data Hillsdale, NJ: Erlbaum.
Swan, G E., & MacDonald, M L (1978) Behavior therapy in
prac-tice: A national survey of behavior therapists Behavior Therapy,
9, 799–801.
Taylor, J C., & Carr, E G (1992a) Severe problem behaviors related to social interaction: I Attention seeking and social
avoidance Behavior Modification, 16, 305–335.
Taylor, J C., & Carr, E G (1992b) Severe problem behaviors
related to social interaction: II A systems analysis Behavior Modification, 16, 336–371.
Vrana, S R., Constantine, J A., & Westman, J S (1992) Startle reflex modification as an outcome measure in the treatment of pho-
bia: Two case studies Behavioral Assessment, 14, 279–291.
Wahler, R G., & Fox, J J (1981) Setting events in applied ior analysis: Toward a conceptual and methodological expan-
behav-sion Journal of Applied Behavior Analysis, 14, 327–338 Wei, W S (1990) Time series Analysis: Univariate and multivari- ate methods Redwood City, CA: Addison-Wesley.
CHAPTER 23

Assessing Personality and Psychopathology with Projective Methods

DONALD J. VIGLIONE AND BRIDGET RIVERA

PROBLEMS WITH DEFINITIONS AND DISTINCTIONS
PROBLEMS WITH COMMON METAPHORS AND MODELS
The Blank Screen Metaphor
The X-Ray Metaphor
The Need for an Informed Conceptual Framework
THE BEHAVIORAL RESPONSE PROCESS MODEL
Self-Expressive and Organizational Components
The Projective Test Stimulus Situation
Processing the Stimulus Situation
The Free-Response Format
A Behavioral Approach to Validity
Functional Equivalence and Generalization
Conclusion
INTERPRETIVE AND CONCEPTUAL ISSUES
Synthetic, Configurational Interpretation
Psychological Testing, Not Psychological Tests
Self-Disclosure and Response Sets
Test or Method?
Contribution to Assessment Relative
REFERENCES
What are projective tests and what are their distinctive characteristics? How should we understand and interpret them? What do they add to assessment? The purpose of this chapter is to address these questions by providing the reader with a meaningful and comprehensive conceptual framework for understanding projective tests. This framework emphasizes a response process that includes both self-expressive and organizational components, that is, what the respondent says and how he or she structures the response. The framework’s implications for projective testing and the contributions of projective testing to assessment are addressed. In the course of this discussion, we hope to correct some common misperceptions about projective tests and to establish a more informed approach to projective tests, projective testing, and assessment in general. Other related topics include implications of the model for interpretation, using projective tests as methods, controversies surrounding projective testing, response sets, response manipulation, and issues from a historical perspective.

It is clear that projective tests have value in the assessment process. This chapter addresses their value within a broad overview, incorporating projective tests and methods within a single domain. Encompassing all projective tests, as is the challenge of this chapter, necessitates this inclusive, global approach and precludes detailed, test-specific characterizations. In general, we have reserved our comments about specific tests to the Rorschach, Thematic Apperception Test (TAT), figure drawings, sentence completion tests, and the early memory tests. An evaluation of the specific strengths and weaknesses of these or any other individual projective measure awaits others’ initiatives.
PROBLEMS WITH DEFINITIONS AND DISTINCTIONS
Anastasi and Urbina (1996) have characterized a projective test as a “relatively unstructured task, that is, a task that permits almost an unlimited variety of possible responses. In order to allow free play to the individual’s fantasy, only brief, general instructions are provided” (p. 411). This global, descriptive definition identifies some important elements of projective tests. Ironically, however, this definition and others like it impede our understanding of the nature of projective tests when they are casually juxtaposed with so-called objective tests. Without pause, many American psychologists categorize tests according to the traditional projective-objective dichotomy. In thinking and communicating about assessment instruments, these psychologists treat the characteristics of each class of instrument as mutually exclusive or as polar opposites. For example, because objective tests are thought of as unbiased measures, projective tests, by default, are assumed to be subjective. As another example, because objective tests are seen as having standardized administration and scoring, projective tests are assumed to lack empirical rigor.

There are a number of reasons that the projective-objective dichotomy leads to an oversimplified and biased understanding of projective tests. First, the projective-objective dichotomy often results in misleading reductionism. Instruments under the rubric of projective are assumed to be uniform in content, purpose, and methodology. For example, all projective instruments are often reduced and treated as equivalent to a classic exemplar such as the Rorschach. Reducing all projective instruments to the Rorschach ignores their incredible diversity. Not only do these tests target many different domains of functioning, but they also employ a great variety of methodologies for the purposes of inducing very different response processes. For example, early instruments included an indistinct speech interpretation, word association, cloud perception, hand-positioning perception, comic strip completion, and musical reverie tests (Anastasi & Urbina, 1996; Campbell, 1957; Frank, 1939/1962; Murray, 1938). Moreover, this great variety suggests that projective processes are ubiquitous and are involved in many real-life behaviors.
Second, the projective-objective dichotomy implies that there are characteristics unique to each class of test, but these supposed hallmarks are misleading. For example, test elements identified as projective, such as the flexible response format and ambiguous or incomplete stimuli, are employed by tests generally considered to be models of objectivity and quantification. Murstein (1963) notes from the flexible response format of some cognitive ability tests that “we learn a great deal about the person who, on the vocabulary subtests of the Wechsler Adult Scale of Intelligence, when asked to give the meaning of the word ‘sentence,’ proceeds to rattle off three or four definitions and is beginning to divulge the differences between the connotations and denotations of the word when he is stopped” (p. 3). E. Kaplan’s (1991) approach to neuropsychological testing focuses on process, similar to the response-process approach in projective testing. Similarly, Meehl points out the projective element of stimulus ambiguity in self-report personality tests. In his Basic Readings on the MMPI: A New Selection on Personality Measurement (1945/1980), Meehl notes that many Minnesota Multiphasic Personality Inventory (MMPI) items, such as “Once in awhile I laugh at a dirty joke,” contain ambiguities. At the most basic level, it is unclear whether “once in a while” refers to once a day, once a week, or once in a month.

Third, the stereotypic juxtaposition of objective and projective testing lends a pejorative connotation to projective tests that suggests they lack objectivity. This is misleading. Many projective tests are quantified and standardized in terms of administration, and more should be. If we take the example of cognitive tests, the style or process of the response can be systematically observed, quantified, and standardized. This qualitative-to-quantitative test development strategy is exactly the same procedure used in sophisticated quantification of projective tests, as in the Rorschach Comprehensive System (Exner, 1993) and the Washington Sentence Completion Test (Loevinger & Wessler, 1970). Such approaches can result in psychometrically sound quantification and standardization. For example, Joy, Fein, Kaplan, and Freedman (2001) utilized this procedure to standardize observation of the Block Design subtest from the Wechsler scales. Other research summarized by Stricker and Gold (1999) and Weiner (1999) indicates that behavioral observation within projective tests can be used to elaborate previously developed hypotheses and to synthesize inferences about the respondent. These same authors also demonstrated these tactics in case examples.
Of course, quantification and reducing examiner bias, that is, variability introduced by examiners, are important goals in improving psychological assessment. Nonetheless, reducing examiner variability is not the only goal of assessment and is not equivalent to validity and utility. Indeed, further research should address the extent to which the examiner’s input is induced by the subject, as would be the case with reciprocal determinism, increasing the ecological validity of projective tests (Bandura, 1978; Viglione & Perry, 1991). Furthermore, one may speculate that overemphasis on eliminating examiner variability to achieve objectivity can increase test reliability at the expense of validity when it limits salient observations by the examiner.

Finally, projective and objective tests resemble each other in that they share the same goal: the description of personality, psychopathology, and problems in living. However, the dichotomy highlights the differences in method and overlooks fundamental differences in their approach to understanding personality. Later sections of this chapter will highlight some of these differences. As we shall see, the differences may be more in the philosophy of the psychologist using the tests rather than in the tests themselves.
The foregoing are only a few examples of the distortions involved in the unexamined use of the projective-objective dichotomy of tests. Furthermore, this familiar dichotomy damages the reputation of projective testing and misleads students. A more informed approach to projective testing is needed. Along those lines, we will juxtapose projective tests against self-report tests in the remainder of this chapter.
PROBLEMS WITH COMMON METAPHORS
AND MODELS
Like the distinction between projective and objective tests, the common metaphors and models used to describe the projective response process can be grossly misleading. The two well-known metaphors of the projective response process are the blank screen and the X-ray machine. Each metaphor contains an implicit theoretical model of projective testing that shapes our understanding of the projective response process. In this section we critically examine both metaphors.
The Blank Screen Metaphor
The most common and stereotypic metaphor is that of the blank screen. In this metaphor, a projective test stimulus is portrayed as a blank screen or canvas upon which the respondent projects his or her inner world (Anastasi & Urbina, 1996). In the reductionistic application of this metaphor, response content is treated as a direct representation of the respondent’s inner life. For example, when a respondent projects his or her aggression onto the stimuli, the response content contains aggressive themes as a result. The examiner then equates these aggressive themes with the personality trait of aggression. When taken to the extreme, the blank screen metaphor has had two consequences on our approach to projective tests: an overemphasis on response content and an underappreciation for the role of the projective test stimulus and the examination context. By examination context we mean the various situational factors as experienced by the respondent. These include the demands on the respondent given the circumstances of the evaluation, the implicit and explicit consequences of the examination, and the interaction between the examiner and respondent.
The blank screen metaphor suggests that the only necessary components to projective test stimuli are ambiguity and a lack of structure. These components are thought to facilitate response content, that is, the free expression of the respondent’s internal world. The more ambiguous and unstructured the stimulus, the more it was presumed that the personality would be directly expressed in the response. Historically, this simplistic view has led to an emphasis on response content and to the interpretive viewpoint that the test was equivalent to or symbolized an internal response or reality (Murstein, 1963). Aspects of test responses were often seen as symbolic of and equivalent to personality and constituted the basis for grand interpretations. Figure 23.1 presents a schematic for this and other models.
However, increasing the blankness (so to speak) of the screen by increasing the ambiguity of the stimuli does not necessarily produce more useful or valid information. Research into the relationship among amount of ambiguity, structure of pictorial stimuli, and test validity has not led to consistent findings (Murstein, 1961, 1963, 1965). For example, the blank TAT card produces relatively conventional responses that are less revealing of the individual than are the rest of the cards, all of which include a picture of either a person, a group of people, or some other scene. Moreover, eliminating the more recognizable and salient visual aspects of the Rorschach stimuli (what Exner, 1996, called the critical bits) does not lead to more productivity. In fact, the available research supports the view that the suggestive aspects of the stimulus, rather than the lack thereof, are what is important. Empirical data clearly demonstrate that the physical stimulus is crucial (Exner, 1974, 1980; Murstein, 1961; Peterson & Schilling, 1983).
Figure 23.1 Panel A: The theoretical model of the response process as suggested by the blank screen metaphor; Panel B: The theoretical model of the response process as suggested by the X-ray metaphor; Panel C: The proposed problem-solving model of the response process.

What we know about Herman Rorschach’s work in developing his test attests to the fact that it is not ambiguity or lack of structure that contributes to the test’s usefulness. It appears that each stimulus plate was designed to contain visually recognizable forms, or critical bits, along with some arbitrary components (Exner, 1996, 2000). Rorschach may have included the arbitrary contours to interfere with the processing of these suggestive, recognizable forms. The plates were carefully chosen, drawn, and redrawn so that many versions existed before Rorschach finalized the designs. Anyone who has ever made inkblots has found that most products look simply like inkblots and are not suggestive of other forms or objects. Thus, it seems that the stimulus plates were intended to be provocative to respondents while also being just unclear enough to engage respondents’ problem-solving skills. This inconsistency between the recognizable or suggestive components of the stimulus plates and the more arbitrary forms is critical because it constitutes a problem to be solved. In this sense, projective test stimuli have a clear purpose: to present the respondent with a problem-solving task. For example, a major part of the Rorschach projective task is to reconcile visual and logical inconsistencies among blot details and between the blot and the object (or objects) seen. It is the idiosyncratic ways in which respondents solve the problem, rather than merely the content they project onto a blank screen, that reveals useful and valid information. Thus, understanding projective stimuli as blank screens, rather than as problems to be solved, is a fundamental misconception about projective tests that can lead to inaccurate interpretations of test behaviors.
The X-Ray Metaphor
Another common metaphor is that of an X-ray machine. In this metaphor a projective test acts as an X-ray of the mind, so to speak, that allows the interpreter to observe directly the contents of the respondent’s mind (see Figure 23.1). Both Frank (1939/1962) and Murray (1938) mentioned this image in their seminal work, so that it has historical precedents. However, like the blank screen metaphor, the X-ray metaphor leads to a focus on response content and the way in which the content directly represents personality. More importantly, the X-ray metaphor diminishes the role of the respondent in the response process.
Examining Frank’s (1939/1962) original work allows one to achieve a more adequate understanding of his purpose for using the X-ray metaphor. When Frank first used it, he compared learning about personality to the then-current technologies in medical and physical science that allowed one to study internal anatomical structures through noninvasive techniques. However, Frank included a critical distinction between projective tests and medical tools, a distinction that is typically excluded from today’s common understanding of the X-ray metaphor. Frank noted that personality, unlike the target of an X-ray machine, is not a passive recipient of attention. In responding to projective test stimuli, personality does not simply cast a shadow of its nature onto a plate. Rather, Frank contended that personality is an active organizing process.
Despite having been written more than 60 years ago, Frank’s ideas reveal a complex and informed perspective on personality, one that is especially relevant to understanding the nature of projective testing:

Personality is approachable as a process or operation of an individual who organizes experience and reacts affectively to situations. This process is dynamic in the sense that the individual personality imposes upon the common public world of events (what we call nature), his meanings and significances, his organization and patterns, and he invests the situations thus structured with an affective meaning to which he responds idiomatically. (1939/1962, p. 34)

Frank went on to describe personality as a “dynamic organizing process.” He contrasted this subjective, synthetic, dynamic process of personality to the objective, external, concrete reality of the world, including the host culture’s shared conventional experiences. In Frank’s view, the world of culture also influences the personality and its understanding of the external world but cannot account for personality processes and behavior.
understand-Later in the same paper, Frank described projective niques as essentially inducing the activity and processing ofthe personality:
tech-In similar fashion we may approach the personality and induce the individual to reveal his way of organizing experience by giv- ing him a field (objects, materials, experiences) with relatively little structure and cultural patterning so that the personality can project upon that plastic field his way of seeing life, his mean- ings, significances, patterns, and especially his feelings Thus,
we elicit a projection of the individual personality’s private world because he has to organize the field, interpret the material and react affectively to it More specifically, a projection method for study of personality involves the presentation of a stimulus- situation designed or chosen because it will mean to the subject, not what the experimenter has arbitrarily decided it should mean (as in most psychological experiments using standardized stim- uli in order to be “objective”), but rather whatever it must mean
to the personality who gives it, or imposes it, his private, syncratic meaning and organization (1939/1962, p 43)
idio-These quotes make it clear that the respondent’s tional style and affect are critical to the projective testingprocess, and that the process involves more than simplyadding content to a stimulus field Moreover, unlike self-report tests, projective test stimuli give respondents an op-portunity to express their organizational styles and affect.Thus, a projective test allows the examiner to observe per-sonality in action with cognitive, affective, interpersonal, andmeaning-making activities
Trang 30organiza-The Behavioral Response Process Model 535 The Need for an Informed Conceptual Framework
This critical review of traditional metaphors and models for projective testing points to their serious shortcomings and oversimplifications. In contrast to a blank screen, projective stimuli are more like problem-solving tasks. In contrast to a passive personality that unknowingly projects itself onto a blank screen or that is examined with X-ray vision, personality in projective testing is seen as a much more active, organizing, and selective process. Perhaps the most accurate portrayal of projection is that the personality does not project light onto the blank screen of the test, but rather, the test projects itself through the active organizing process of the personality to the response. In other words, the individual’s personal characteristics are observable in the refracted light—that is, the manner in which the person responds to the test. In sum, there is a need for a broader and more informed conceptual framework for understanding projective testing.
From comparisons between the overt stimuli and response, the interpreter infers the covert personality process. This input-processing-output sequence is the essence of our model for projective testing and is presented in the next section. Such a framework goes beyond projection and response content by embracing a problem-solving perspective.
THE BEHAVIORAL RESPONSE PROCESS MODEL
A problem-solving model leads us to approach personality as a processor of information. Rather than interpreting a response as a symbolic representation of personality, we interpret it in the context of the stimulus situation and use that interpretation to build a model of the respondent’s processing and problem-solving styles. Rather than using a static conceptualization of personality, our understanding incorporates a model of personality as a problem-solving processor of life’s ongoing challenges.
The projective test response can be seen as the development and formulation of a solution to a problem, the structure and content of which reveal something about the individual. Every projective test involves a task, which we can understand as a problem to be solved. For example, the TAT demands the creation of a story that reconciles the suggestive elements of the pictures with ambiguous and missing cues. As another example, the early memory test involves constructing, typically without a complete sense of certainty, a memory dating back to the beginning of one’s life. The self-expressive quality and the adequacy of these solutions can be the object of the interpretive system (e.g., for the TAT, see Ronan, Colavito, & Hammontree, 1993).
The history of projective testing and misuses in current practice reveal that we have drifted from the focus on input-processing-output as first described by Frank (1939/1962). This drift has led to two gross oversimplifications of projective testing: (a) Projective test responses are inappropriately equated with personality, and (b) verbal and motor behaviors within projective test responses are thought to symbolize large patterns of life behavior. In contrast, an informed response process approach entails inferring a model of an individual’s personality and behavior from projective test output based on a thorough understanding of the stimuli, task demands, and processing involved. The future of projective assessment depends on advancing this response process and problem-solving approach.
The Standards for Educational and Psychological Testing (American Educational Research Association, American Psychological Association [APA], & National Council on Measurement in Education, 1999) incorporate this interest in the response process. According to the standards, evidence based on examination of the response process, including eye movements and self-descriptions of the respondent’s experience, should be used to validate test inferences. Response process research is extremely valuable as a basis for clinical inference (e.g., Exner, Armbruster, & Mittman, 1978). The response characteristics of each commonly used projective test should be researched and delineated. Each projective test differs in its response process, so that each test must be addressed and mastered separately, even if these tests share some common processes and principles.
Self-Expressive and Organizational Components
Within the response process in projective testing, two components have traditionally been identified: (a) a content or self-expressive component and (b) a formal or organizational component. Often these components are referred to as the projective and problem-solving components of projective tests, but these terms are subject to misinterpretation. This chapter refers to them as the self-expressive and organizational components of projective testing.
To oversimplify, the self-expressive component largely involves content features of the response—that is, what the subject says, writes, or draws and what associations the individual brings to the task. Self-expression occurs because projective stimuli provoke the imagination, acting as a stimulus to fantasy (Exner & Weiner, 1995; Goldfried, Stricker, & Weiner, 1971). Thus, respondents react to content suggestions in a task (a sentence stem, a picture, or a recognizable form or critical bit of a Rorschach plate) and rely on themselves to go beyond that content to access and express information from their own stores of images, experiences, feelings, and thoughts.
In contrast, the organizational component involves the formal or structural features of the response: how the individual answers the questions, solves the task, structures the response, and makes decisions. For example, the organizational component includes how the stimulus details are incorporated into TAT or Rorschach responses and whether the stimulus features are accurately perceived. Use of detail and the accuracy of the response are organizational features, which can be applied to almost all projective tests. Projective tests all pose problems to solve; the adequacy, style, and structure of the solutions to the problems are encompassed by the organizational component.
The common oversimplification in conceptualizing projective testing is to limit the scope of projective testing to the self-expressive component. Doing so leads one to interpret only response content themes. Even if the organizational component of a projective test is recognized, it is often conceptualized as separate from the content component.
We believe that separating the self-expressive and organizational components is another misconception that should be corrected. If one examines the projective test respondent’s real-time processing while solving the task and developing a response, one observes that self-expressive and organizational aspects are simultaneous and interconnected. One solves the problem not only by organizing the input and the output, but also by selecting one’s own self-expression to add to the response. From another perspective, including self-expression is not merely a projection of a trait, need, or perception. Thus, we are making an important distinction here: Problem-solving within projective tests encompasses both content and formal, and both self-expressive and organizational, facets. What are conventionally considered projective or content/self-expressive components are actually best understood as part of a single problem-solving process. Thus, the respondent’s way of problem-solving may involve, for example, invoking dependent themes. A respondent’s adding in certain thematic interpretations, motives, interests, or fantasies to projective test responses thus is part of the problem-solving component of these tests.
Moreover, there may be individual differences, both within an assessment and in one’s everyday life, in terms of how much content is projected. Some people may project more personalized content than others. Others who express less personalized content might be characterized as stereotyped, overtly conventional (Schafer, 1954), or, alternatively, as efficient and economical (Exner, 1993). We will elaborate this problem-solving process as the centerpiece of this chapter. We rely on information-processing and behavioral approaches in specifying its subcomponents.
The Projective Test Stimulus Situation
In our view, the projective-testing stimulus encompasses a complex of factors. The stimulus in a projective test is more than the concrete stimulus itself, that is, more than merely a picture, a sentence stem, a Rorschach plate, or an invitation to remember. Masling’s (1960) work with the Rorschach and a variety of studies with the TAT (Murstein, 1961, 1963) reveal that situational, contextual, and interpersonal stimuli influence the response process. Extrapolating from these findings, we propose that the actual stimulus for a projective test is the entire situation, or what we call the stimulus situation. Rather than merely being concrete stimuli, the stimulus situation encompasses the interpersonal interaction with the examiner, what the respondent is asked to do with the stimulus, and contextual issues such as the reason for referral. For example, the TAT stimulus situation involves the fact that the respondent is being called on to tell a story to reveal something about him- or herself in front of another person, typically a stranger with some authority and power, about whom the respondent knows very little. Accordingly, when the stimulus is administered individually there is also a strong interpersonal component to the stimulus situation. Furthermore, this interpersonal component is implicit in paper-and-pencil projective tests. It is also present in self-report tests of personality, although it is often ignored.

A critical component of the stimulus situation is the respondent’s awareness of the obvious potential for the response to reveal something of him- or herself. Reactions to the pressure to self-disclose are invoked by the stimulus situation. Accordingly, response sets, defensiveness, expression of social desirability, and response manipulation are fundamental to the response process. As will be addressed later, these are more than impediments or moderators of test validity.
Processing the Stimulus Situation
Taking all of these issues into consideration suggests that the respondent reacts to an overall situation, including both concrete and experiential components, as a pattern or field. Such patterning is a well-known fact in the study of human perception. The respondent organizes that field into figure and ground, responding more distinctly to the figural components of the stimulus situation. This figure-ground patterning exists not only within the processing of the concrete projective test stimulus, but also with the entire stimulus situation. Accurate interpretation depends on considering the concrete stimulus elements in terms of, for example, Rorschach card pull, sentence-stem characteristics, and salient stimulus components for individual cards from storytelling tasks (Exner, 1996, 2000; Murstein, 1961, 1963; Watson, 1978). These prominent, recognizable aspects of the concrete stimulus elicit common or popular responses. Peterson and Schilling (1983) have written an informative, conceptual article that frames these issues for the Rorschach. Knowing the test and its input, processing, and output characteristics provides a context within which to understand the implications of responses for personality. Standardization data and empirical descriptions, the examiner’s experience with the stimulus situation, recognition of the response pull for individual test stimuli, and knowledge of conventional and common responses all contribute to optimally valid interpretation.
The Free-Response Format
Freedom in the Stimulus Situation
Freedom and lack of direction are crucial characteristics of the projective test stimulus situation. The individualistic, idiographic feature of the projective test response process starts with the individual differences in the perception of the stimulus situation (Colligan & Exner, 1985; Exner, 1980; Perry, Felger, & Braff, 1998). The individual can choose to attend to different components of the stimulus situation, focusing on, for example, a particular element of the physical stimulus, a demand within the task, or some interpersonal aspect related to the task. The individual may offer an overall gestalt, or may focus on a single element or on inconsistencies between stimulus subcomponents. Accordingly, self-regulation through stimulus control can be assessed through projective testing, in terms of what an individual attributes to a stimulus, when one identifies what the individual responds to in the stimulus situation.
Another important, related feature of the processing of the stimulus situation is decision making. For example, respondents must decide what to reveal or focus on within the story, image, early memory, or sentence completion item. Decision making also requires reconciling contradicting elements and completing unfinished information. The projective test stimulus situation does not provide much information to assist the respondent in evaluating the appropriateness and adequacy of a response. In contrast to ability tests, there are no obvious right answers. The lack of information in the stimulus situation interacts with the free-response format to impede attempts at self-evaluation of the appropriateness of the response. Thus, decision making and processing in the face of minimal external guidance with concomitant insecurity is also a major component of the response process and projective test task. In other words, coping with insecurity and uncertainty without sufficient information about the adequacy of one’s response is part of the response process.
Response Characteristics
With self-report tests, the interpretive dimensions (e.g., depression for Scale 2 of the MMPI) are predetermined. In contrast, with projective tests, interpretive dimensions are implicit in the test behavior. The interpreter observes the respondent’s behavioral patterns in order to construct the dimensions to be described. For example, implicit motives organize pictures into stories (McClelland, Koestner, & Weinberger, 1989), and the interpreter describes these dimensions within the interpretation. As noted earlier in this chapter, a crucial aspect of the projective test stimulus situation is the lack of information regarding the adequacy of the response. As suggested by Campbell, projective tests are “typically open-ended, free, unstructured, and have the virtue of allowing the respondent to project his own organization on the material” (1957, p. 208). In other words, it is the respondent who accounts for a great majority of the variation in the test responses in terms of their self-expressive and organizational components (Viglione & Perry, 1991). The fact that the response is wholly formed and created by the respondent has been referred to by Beck (1960) as the gold of the Rorschach.
Compared to projective tests, the fixed test stimuli in self-report tests and limited response options themselves account for a much greater part of the variation among test responses or behaviors. Test developers predetermine structured test behaviors and, as a result, limit the freedom of response. In other words, there is much less variation in true versus false than there is in TAT responses or early memories. Historically, this fixed item and response format was typical of the personality and attitude measurement devices that dominated during the mental testing period from 1920 to 1935, and against which projective testers rebelled. On the other hand, free responses are not essential for a test to be projective because multiple-choice or rating-scale response formats have been used (Campbell, 1957). Nevertheless, the dominant projective tests in clinical practice use a free-response format. Multiple-choice and rating-scale formats have been primarily used for research on test validity and the response process (e.g., Exner et al., 1978).
Within the free-response format the respondent creates or organizes a response and expresses him- or herself through the content of the response. The response content is neither preselected nor prestructured by the test developer, but is an expression of the given individual in the context of the exam. In an article introducing a conceptual model for psychopathology and the Rorschach, Viglione and Perry (1991) couched this in terms of the limited environmental influence on Rorschach responses. This argument can be extended, in some degree, to all projective testing. As described in this article, projective test behaviors are largely influenced by the internal world rather than by the test environment and stimuli. The content, structure, and adequacy (and the evaluation of that adequacy) of the response come from the individual. The interpretive system accompanying the projective test is an aid in directly learning about the individual through analyzing the self-expressive and organizational aspects of these behavioral productions.
The free-response format maximizes the expression of individual variance. The population of possible answers is unbounded in free-response tasks, so that the response itself can capture much more individual variation than can an item in a self-report personality test. In this way projective tests maximize salience and relevance of the response to the individual, a characteristic that has been referred to as the idiographic focus of projective testing. Indeed, the complexity and variety of these responses have made it difficult to create comprehensive scoring systems. From a psychometric perspective, this complexity and variety may translate to less reliability and more interpreter bias but, nevertheless, more validity.
Interpretive Implications
What has been called expressive style is an example of the organizational component of a projective test response (Bellak, 1944). The free-response component of the projective test stimulus situation allows expressive style to emerge. It can be characterized by the following questions: “Does he talk very fast or stammer badly? Is he verbose or terse? Does he respond quickly or slowly . . .” (Murstein, 1963, p. 3). Expressive style is also captured in nonverbal ways, which are important to understanding an individual’s functioning and interpersonal relationships. Does the respondent use space in drawing and sentence completion blanks neatly? Is the respondent overly concerned with wasting space and time, or sure to involve elaborated and elegant use of symbolic flair in his or her presentations? Indeed, the nonverbal mode of functioning and being in the world is accessed by the projective tests. In support of this importance of nonverbal functioning, neuropsychological research would suggest that aspects of interpersonal and emotional functioning are differentially related to visual-spatial, kinesthetic, and tactile modes in comparison to verbal modes. Future research might attempt to investigate the relative contributions of expressive style and nonverbal modes to validity and utility.
The multimodal characteristic of the projective test response greatly multiplies its informational value. For example, a behavioral observation of (a) tearfulness at a particular point in an early memory procedure, (b) a man’s self-critical humor during a TAT response that describes stereotypic male behavior, (c) fits and starts in telling a story with sexual content, (d) a seemingly sadistic chuckle with “a pelt, it’s road kill” Rorschach response, (e) rubbing a Rorschach plate to produce a response, or (f) a lack of positive, playful affect throughout an early memory testing are all critical empirical data subject to interpretation. Such test behaviors can lead to important hypotheses and allow one to synthesize various components of the test results by placing them in the context of the individual’s life. These insights are not readily available or subject to systematic observation through other means in an assessment session. These are examples of the fundamental purpose of projective tests: to gather an otherwise unavailable sample of behavior to illuminate referral issues and questions emerging during the exam.
avail-In addition, projective tests allow a rare opportunity to serve idiographic issues interacting with the instrumental di-
ob-mension of behavior Levy (1963) defined the instrumental
dimension of behavior as the adequacy or effectiveness of the
response in reaching some goal In cognitive ability testingthis dimension could be simplified to whether a response isright or wrong Like respondents on ability, cognitive, orneuropsychological tests, projective test respondents perform
a task To varying degrees, all projective test responses can beevaluated along a number of instrumental dimensions includ-ing accuracy, synthesis, meaningfulness, relevance, consis-tency, conciseness, and communicability For example, theinstrumental dimension relates to the quality, organization,and understandability of a TAT story or early memory as ex-plained to the examiner In ability tests, we concern ourselvesmostly with the adequacy of the respondent’s outcome, an-swer, or product In contrast, in projective tests we are con-cerned with not only the adequacy of the outcome, but alsothe process and behavior involved in producing the outcome
In our nomenclature, projective tests allow one to observe theinteraction between the self-expressive and instrumentalcomponents of behavior—in other words, how adequate a re-sponse is in light of how one solves a problem Extending thisinteraction, projective test behavior also allows the examiner
to observe the impact of emotional and interpersonal sures on the adequacy and approach to solving problems.This is a crucial contribution of projective tests to assess-ment, providing an interpretive link between findings fromself-report tests and ability tests