
Psychotherapy Research, 12(1), 1–21, 2002. © 2002 Society for Psychotherapy Research

HERMENEUTIC SINGLE-CASE EFFICACY DESIGN
In this article, I outline hermeneutic single-case efficacy design (HSCED), an interpretive approach to evaluating treatment causality in single therapy cases. This approach uses a mixture of quantitative and qualitative methods to create a network of evidence that first identifies direct demonstrations of causal links between therapy process and outcome and then evaluates plausible nontherapy explanations for apparent change in therapy. I illustrate the method with data from a depressed client who presented with unresolved loss and anger issues.
All Gaul, wrote Julius Caesar (51 BCE/1960), is divided into three parts. Similarly, psychotherapy research can be organized into three main areas. Unlike ancient Gaul, these domains are defined not by the rivers that separate them but rather by the scientific questions that motivate them and by the language, customs, and principles of the researchers who seek to answer these questions.
These three questions and the research territories they define are as follows: (a) Has this client (or group of clients) actually changed? (psychotherapy outcome research; e.g., Strupp, Horowitz, & Lambert, 1997); (b) is psychotherapy generally responsible for change? (psychotherapy efficacy and effectiveness research; e.g., Haaga & Stiles, 2000); and (c) what specific factors (within therapy or outside it) are responsible for change? (psychotherapy change process research; e.g., Greenberg, 1986).
In this article, I focus on the second question: the causal efficacy or effectiveness of psychotherapy. However, tackling this question requires answering both the first question (whether there is any actual change) and the last question (what processes mediate change). Furthermore, I attempt to meet the challenge of answering these three questions for single therapy clients and nonbehavioral therapies by proposing the hermeneutic single-case efficacy design (HSCED).1

1 In this, I had hoped to be aided by the scientific incarnation of Asterix and Obelix, those two intrepid (but fictional) adventurers who, according to the graphic novels of Goscinny and Uderzo (1961), so plagued Julius Caesar in his attempts to conquer Gaul. Unfortunately, copyright restrictions severely limit the use of these characters, and especially modified forms of their names (e.g., "hermeneutix" and "critico-reflectix").
This article is based on the Presidential Address delivered at the June 2001 meeting of the Society for Psychotherapy Research in Montevideo, Uruguay. I gratefully acknowledge the inspiration of Art Bohart, on whose initial work the method described here is based, as well as the contributions of Denise Defey, Constance Fischer, David Rennie, Kirk Schneider, and my students, Mona Amer, Rob Dobrenski, Helena Jersak, Cristina Magaña, Rhea Partyka, Suzanne Smith, John Wagner, and especially Michelle Cutler.
Correspondence concerning this article should be addressed to Robert Elliott, Department of Psychology, University of Toledo, Toledo, OH 43606. E-mail: Robert.Elliott@utoledo.edu.
The Need for a Critical-Interpretive Approach to Causal Research Design
The standard tool for addressing the efficacy of psychotherapy, the randomized clinical trial (RCT) design, is an extremely blunt instrument that suffers from a host of scientific difficulties (see Cook & Campbell, 1979; Haaga & Stiles, 2000; Kazdin, 1998), especially poor statistical power, differential attrition, and poor generalizability as a result of restricted samples.
Causal Emptiness
Not the least of these difficulties are two related problems: First, RCTs rely on a stripped-down operational definition of causality (from J. S. Mill; see Cook & Campbell, 1979), in which inferring a causal relationship requires establishing (a) temporal precedence (priorness); and (b) necessity and sufficiency (that cause and effect covary). Thus, RCTs are "causally empty," offering conditions under which inferences can be reasonably made but providing no method for truly understanding the specific nature of the causal relationship. For this reason, Haynes and O'Brien (2000) and others have argued that inferring a causal relation requires another condition: the provision of a plausible account ("logical mechanism") for the possible causal relation. Unfortunately, RCTs provide no built-in method for establishing or identifying such plausible causal processes.2

Poor Generalizability to Single Cases

The second problem is that RCTs do not warrant causal inferences about single cases. Even when a therapy has been shown to be responsible for change in general, for any specific client, factors other than therapy may actually have been the source of the observed or reported changes, or the client's apparent change may have been illusory. The existence of this inference gap argues for moving the locus of causal inference from the group to the single case, where each client's distinctive change process can be traced and understood.
Rescuing the N = 1 Design
The traditionally sanctioned alternative to group experimental design has been single-participant experimental design (Kazdin, 1998). The logic and potential clinical utility of these designs is compelling (Sidman, 1960), and advocates have long argued for the applicability of these designs to nonbehavioral treatments (Morgan & Morgan, 2001; Peterson, 1968). Nevertheless, the logic of these designs depends on behavioral assumptions about the change process, especially the situational specificity of behavior and the foundational role of functional analysis in treatment. As a result, these designs have never caught on outside traditional behavior therapy, not even for cognitive–behavioral therapies.
To address the difficulties of applying single-case design to nonbehavioral therapies, methodologists such as Kazdin (1981) and Hayes, Barlow, and Nelson-Gray (1999) have proposed more flexible alternatives that "stretch" the guidelines of standard single-case design, in particular, the clinical replication series. These authors have proposed the following characteristics of single-case research as useful for increasing internal validity (Kazdin, 1981):

1. Systematic, quantitative data (vs. anecdotal)
2. Multiple assessments of change over time
3. Multiple cases (a form of multiple baseline design)
4. Change in previously chronic or stable problems
5. Immediate or marked effects after the intervention

Note that the first three features are design strategies over which the researcher has some control, whereas the last two (stability and discontinuous change) are case specific and emergent.

2 This is not to say that "enriched" RCTs (Piper, 2001) cannot be used to identify specific causal processes; rather, that carrying out the essential elements of an RCT does not provide this understanding.
Sources of HSCED
Kazdin's (1981) general guidelines provided me with one of the sources for HSCED. Another source was Cook and Campbell's (1979) brief description of the "modus operandi" (i.e., one-group post-only design), which they argued can be interpreted when there is rich contextual information and what they called "signed causes" (i.e., influences whose presence is evident in their effects). Mohr (1993) went even further, arguing that the single case is the best situation for inferring and generalizing causal influences, which are obscured in group designs.
The final and most important source for HSCED was Bohart and Boyd's (1997) description of an interpretive approach to examining client qualitative accounts of change over therapy. Starting from a client's assertion that she has changed and her claim that this is the result of therapy, Bohart and Boyd ask, "What would it take to make a convincing case that therapy caused a reported change?" In general, the answer to this question takes the form of two types of information: (a) other evidence that the change occurred (corroboration); and (b) plausible ruling out of alternative possible sources of the change.
A rich case record of comprehensive information on therapy process and outcome (e.g., using multiple perspectives, sources, and types of data) provides a useful starting point. However, critical reflection on the claim of therapy-caused change is also required through maintaining awareness of one's personal expectations and theoretical presuppositions while systematically searching for evidence that casts doubt on one's preferred account. To do this, Bohart and Boyd (1997) proposed a set of plausibility criteria for evaluating client causal accounts, including evidence for grounding in the client's experience, deviation from expectations, elaboration, discrimination between positive and negative effects and processes, idiosyncraticness, and coherence.
Essentials of HSCED
In our society, experts make systematic use of practical reasoning systems to make various important judgments, including legal rulings and medical decisions.
HSCED is proposed as such a practical reasoning system, with the specific purpose of evaluating the causal role of therapy in bringing about outcome. It builds on Bohart and Boyd's (1997) approach but examines a larger set of alternative nontherapy explanations, makes greater use of quantitative outcome and weekly change data, and devotes more attention to systematically determining whether change has occurred.
To illustrate HSCED, I use a running case example: a depressed, 49-year-old European-American man whom I refer to as Paul. This client's main presenting problems were financial worries, general negativity and cynicism, problems communicating with his son (with whom he felt identified and who had become clinically depressed also), and, most importantly, unresolved issues from a rapid succession of deaths in his family (mother, father, brother) 10 years previously. He was diagnosed with bipolar II disorder (major depressive episodes plus hypomania) and was seen at the Center for the Study of Experiential Therapy for 39 sessions of process-experiential therapy, primarily focusing on issues of anger and loss. He was seen by a second-year clinical psychology graduate student over the course of 16 months. I did the research interviews.
Rich Case Record
The first prerequisite for an HSCED is a rich, comprehensive collection of information about a client's therapy. This includes background information as well as data on therapy process and outcome, using multiple sources or measures. I have found the following data to be useful:

1. Basic facts about client and therapist. These include demographic information, diagnoses, presenting problems, and therapeutic approach or orientation (e.g., that given previously for Paul).
2. Quantitative outcome measures. Therapy outcome has both descriptive qualitative (how the client changed) and quantitative (how much the client changed) aspects. For Paul, quantitative measures included standard self-report questionnaires such as the Symptom Checklist-90 (SCL-90-R; Derogatis, 1983), Inventory of Interpersonal Problems (IIP; Horowitz, Rosenberg, Baer, Ureño, & Villaseñor, 1988), and Simplified Personal Questionnaire (PQ; Elliott, Shapiro, & Mack, 1999). At a minimum, these measures should be given at the beginning and end of therapy, but it is also a good idea to give them periodically during therapy, every 8 to 10 sessions. Paul's quantitative outcome data are given in Table 1.
3. Change Interview (Elliott, Slatick, & Urman, 2001). This semistructured interview provides (a) qualitative outcome data in the form of client descriptions of changes experienced over the course of therapy; and (b) client descriptions of their attributions for these changes, including helpful aspects of their therapy. (Information on negative aspects of therapy and on medications is also collected.) The Change Interview takes 30 to 45 minutes and can be performed by a third party every 8 to 10 sessions, at the end of therapy, and at follow-up.
I asked Paul to tell me what changes he had noticed in himself since therapy started. He listed six pre-to-post changes, including "more calm in the face of challenges," "giving myself more credit for accomplishments," "doing better financially," "being a happier person," "being more hopeful about my life," and "[I] don't feel young anymore" (a negative change). I then asked Paul to rate each change on three rating scales, among them the attributional question, "How likely do you think this change would have been without therapy?" Paul substantiated his strong therapy attribution with his descriptions of the various ways in which he understood his therapy to have brought about the changes, including this summary of his change process: "I don't think I would have looked at those [feelings] on my own. And obviously there were a lot of those that I didn't look at on my own . . . I think the therapy actually in some way . . . gave me a process of grieving, maybe not all the stages of grief, but some."

TABLE 1. Outcome Data for Client PE-04 (Paul)

Note. Caseness = cutoff for determining whether client is clinically distressed; RC min = minimum value required for reliable change at p < .2; SCL-90-R GSI = Global Severity Index of the Symptom Checklist-90-Revised; IIP-26 = Inventory of Interpersonal Problems-26. Values in parentheses use the median of the first three and last three PQ scores. Sources for values given: Barkham et al. (1996; Inventory of Interpersonal Problems, Personal Questionnaire); Ogles, Lambert, & Sawyer (1995; SCL-90-R GSI).
a Reliable improvement from pretherapy.
*p < .2.

4. Weekly outcome measure. A key element in HSCED is the administration of a weekly measure of the client's main therapy-related problems or goals. We used the Simplified PQ (Elliott et al., 1999), an individualized target complaint measure consisting of roughly 10 seven-point distress rating scales. Paul's weekly mean PQ scores are given in Figure 1, which reveals numerous statistically reliable (>.53) week-to-week shifts in PQ scores (a simple way of flagging such shifts is sketched after Figure 1).
5. Helpful Aspects of Therapy (HAT) Form (Llewelyn, 1988). This is a frequently used qualitative measure of client perceptions of significant therapy events.
This open-ended seven-item questionnaire is administered to clients after therapy sessions. In HSCED, HAT data are used to pinpoint significant therapeutic processes that may be associated with change on the weekly outcome measure or to corroborate change processes referred to in the Change Interview. In his HAT descriptions, Paul rated 12 significant events with scores of 8 ("greatly helpful") or higher. These descriptions provide a summary narrative of what the client considered at the time to be the most helpful events in his therapy. Paul gave two events ratings of 8.5, one in Session 10 ("the need to work on resolving my anger toward my father; the realization that the anger I have carried around might be directed at him") and the other in Session 36 ("realizing the anger and/or distrust of my wife; I believe I have been suppressing this").
6. Records of therapy sessions. Therapist process notes and videotapes of therapy sessions are collected in case they are needed to pinpoint, corroborate, or clarify issues or contradictions elsewhere in the data. For example, to make sense out of the largest shifts in Paul's weekly PQ scores, I used his therapist's process notes.
FIGURE 1. Personal Questionnaire means across sessions: PE-04.
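To make the use of the weekly PQ data more concrete, the following minimal sketch (in Python, not part of the original analysis) shows one way reliable week-to-week shifts could be flagged. The .53 threshold is the reliable change value for the PQ cited in this article; the weekly means in the example are hypothetical placeholders, not Paul's actual scores.

```python
# Sketch: flagging reliable week-to-week shifts in weekly PQ means.
# Assumes the reliable change threshold of .53 cited in the text for the
# mean PQ score; the example scores below are made up, not Paul's data.

PQ_RELIABLE_SHIFT = 0.53  # minimum reliable week-to-week change on the PQ

def reliable_shifts(weekly_means, threshold=PQ_RELIABLE_SHIFT):
    """Return (session_number, change) pairs whose week-to-week change
    meets or exceeds the threshold (negative change = improvement)."""
    shifts = []
    for i in range(1, len(weekly_means)):
        change = weekly_means[i] - weekly_means[i - 1]
        if abs(change) >= threshold:
            shifts.append((i + 1, round(change, 2)))  # sessions numbered from 1
    return shifts

if __name__ == "__main__":
    example_pq = [4.4, 4.1, 3.2, 3.4, 4.6, 3.1, 3.0, 2.4]  # hypothetical means
    for session, change in reliable_shifts(example_pq):
        direction = "improvement" if change < 0 else "worsening"
        print(f"Session {session}: shift of {change:+.2f} ({direction})")
```

In practice, the same routine can be run over a client's actual weekly means to locate the sessions worth checking against the therapist's process notes and HAT descriptions.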
Direct Evidence: Clear Links Between Therapy Process and Outcome
In HSCED, the starting point is direct evidence pointing to therapy as a major cause of client change. To be confident about proceeding further with the analysis, it is best to have at least two separate pieces of evidence supporting the therapy–change link.
1. Retrospective attribution. First, the client may attribute a reported change to therapy without specifying the nature of the felt connection. Clear support for the therapy efficacy hypothesis can be found in Paul's "likelihood without therapy" ratings and his description of the role therapy played in helping him feel more calm in the face of challenges.
2. Process–outcome mapping. The content of the client's posttherapy changes corresponds to specific events, aspects, or processes within therapy. For example, 5 of Paul's 12 high-rated significant events (e.g., Session 12: "feeling the hurt, fear, and sadness related to the loss of my family. It enabled me to realize that I can feel what might be under my anger") refer to work on unresolved loss/grief issues regarding his family of origin, his major posttherapy change.
3. Within-therapy process–outcome correlation. In addition, theoretically central in-therapy process variables (e.g., adherence to treatment principles) may be found to covary with week-to-week shifts in client problems. To examine this possibility for Paul's therapy, I correlated his therapist's postsession ratings of her use of process-experiential treatment principles, tasks, and response modes with difference scores on the PQ (n = 34 pairs of data points). Only 2 of the 63 correlations were statistically significant (p < .05), less than would be expected by chance. Therefore, at least on this basis, I found no evidence of a therapy–change link. (A generic version of this correlational check is sketched after this list.)
4. Early change in stable problems. Therapeutic influence can be inferred when therapy coincides with change in long-standing or chronic client problems, contrasting with an explicit or implicit baseline. Paul's mean PQ scores (see Table 1) do appear to show a reliable, two-point drop from pre- to posttreatment. Although we do not know how long Paul's problems had continued at roughly the same level, it is clear that some of them were of many years' duration. Furthermore, his two pretreatment PQ mean scores are consistent with each other (4.44 and 4.11) and in the clinical range (i.e., well above the cutoff of 3). It is true that his weekly PQs (see Figure 1) show some instability, but this appears to be a consequence of three "outlier" sessions (4, 24, 39). If these are ignored, the largest improvement occurs after Session 1, moving the client into the nonclinical range.
5. Event-shift sequences. An important therapy event may immediately precede a stable shift in client problems, particularly if the nature of the therapy process and the change are logically related to one another (e.g., therapeutic exploration of an issue followed the next week by change on that issue).
Although Paul's PQ ratings contained many substantial shifts (see Figure 1), the largest shifts appeared to reflect temporary changes associated with the three outlier sessions. However, the evidence for event-shift sequences in Paul's therapy was weak at best, because negative shifts followed significant helpful events almost as often as positive shifts did.
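The following sketch (not part of the original analysis) illustrates the kind of correlational check described in point 3 above. The data here are simulated placeholders standing in for the therapist's 63 postsession process ratings and the 34 PQ difference scores; with null data of this kind, roughly 3 of 63 correlations should reach p < .05 by chance alone.

```python
# Sketch: within-therapy process-outcome correlation check.
# Correlates each (simulated) therapist-rated process variable with
# week-to-week PQ difference scores and counts nominally significant
# correlations, for comparison with the number expected by chance.

import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(0)
n_pairs, n_process_vars = 34, 63                              # sizes reported in the text
process_ratings = rng.normal(size=(n_pairs, n_process_vars))  # placeholder ratings
pq_differences = rng.normal(size=n_pairs)                     # placeholder PQ shifts

significant = sum(
    pearsonr(process_ratings[:, j], pq_differences)[1] < .05
    for j in range(n_process_vars)
)
expected_by_chance = .05 * n_process_vars  # about 3 of 63 tests

print(f"{significant} of {n_process_vars} correlations significant at p < .05 "
      f"(about {expected_by_chance:.1f} expected by chance alone)")
```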
In this evaluation of possible direct evidence for the efficacy of Paul's therapy, I found supportive evidence on three of five possible indicators, enough to corroborate his claim in the Change Interview.
Indirect Evidence: Competing Explanations for Apparent Client Change
HSCED also requires a good-faith effort to find nontherapy processes that could account for an observed or reported client change. Table 2 summarizes each of eight nontherapy explanations for apparent client change, along with methods for evaluating its presence. The practical reasoning process involved in evaluating these alternatives is like detective work, with contradictory evidence sought and available evidence weighed carefully. As a result, some nontherapy explanations may be ruled out entirely, whereas others may be found to partially or even completely explain the observed change. In addition, it is important to weigh both positive and negative evidence. Discrepancies point to complexities or restrictions on the scope of change or the possible role of therapy.

TABLE 2. Indirect Evidence: Methods for Evaluating the Presence of Nontherapy Explanations

1. Trivial or negative change.
   a. Apparent changes are trivial: Calculate reliability and clinical significance on outcome measures; look at client manner and detail for indications of importance; ask about importance of changes.
   b. Apparent changes are negative: Ask client and therapist about negative changes; test for reliable negative change on outcome measures.
2. Statistical artifacts.
   a. Apparent changes reflect measurement error: Assess for reliable change (RCI calculations).
   b. Apparent change reflects outlier or regression to the mean: Assess duration/stability of client problems.
   c. Apparent change is due to experimentwise error: Replicate change across multiple measures (global reliable change).
3. Relational artifacts: Apparent changes are superficial attempts to please therapist/researcher. Look for specific or idiosyncratic detail; ask client about negative and positive descriptions of therapy; assess client tendency to respond in socially desirable manner.
4. Apparent changes are the result of client expectations (therapy "scripts") or wishful thinking. Evaluate stereotyped vs. idiosyncratic nature of language used to describe changes; ask client whether changes were expected vs. surprising.
5. Self-correction: Apparent changes reflect self-help and self-limiting easing of short-term or temporary problems. Assess duration/stability of client problems (by multiple methods); assess client-perceived likelihood of changes without therapy; look for evidence of self-help efforts begun before therapy.
6. Apparent changes can be attributed to extratherapy life events (e.g., changes in relationships or work). Look for extratherapy events that might have influenced changes; assess client-perceived likelihood of changes without therapy; consider mutual influence of therapy and life events on one another.
7. Psychobiological factors: Apparent changes can be attributed to medication or herbal remedies or recovery from illness. Collect information on changes in medication or herbal remedies; consider role of recovery from illness as possible cause.
8. Apparent changes can be attributed to reactive effects of research, including relation with research staff. Ask client about effects of research; use nonrecruited clients and unobtrusive data collection.

A further consideration is the degree of uncertainty considered tolerable. The circumstances under which therapists and their clients operate preclude near certainty (p < .05). "Reasonable assurance" or "beyond a reasonable doubt" (p < .2) is suggested as a more realistic and useful standard of proof.

Trivial or Negative Change

The first four nontherapy explanations assume that apparent client change is illusory or artifactual. To begin with, the apparent changes may be negative or trivial.

Trivial change. On the one hand, a client might describe a change in such highly qualified or ambivalent terms as to cast doubt on its importance ("I think maybe I'm beginning to see that change might be possible"; "Um, I guess feeling better about myself, maybe just a little bit"). In addition, clients sometimes describe changes in other people ("My husband has finally started fixing the house") or in their life circumstances (one of Paul's changes was "doing better financially"). In the same way, changes on quantitative outcome measures may also fall into the trivial range (e.g., one point on the Beck Depression Inventory).
Negative change. Alternatively, changes might be negative, casting doubt on the overall effectiveness of the therapy. For example, at an earlier assessment, Paul noted that he and his son were now fighting more than when he began therapy. Given the importance of this issue for him, this negative change could be taken as evidence of clinical deterioration, which negated or at least compromised positive changes. Similarly, changes on quantitative outcome measures may also occur in the negative direction (see Ogles, Lambert, & Sawyer, 1995).
Strategies for dealing with trivial or negative change. It is useful to define intervals or threshold values that can be used to define change as nontrivial. Jacobson and Truax (1991) proposed two criteria for evaluating change: (a) statistically reliable change and (b) movement past clinical caseness (i.e., clinical significance) cutoffs. Table 1 includes these criteria for three key measures (SCL-90-R, IIP, and PQ values taken from Barkham et al., 1996; Ogles et al., 1995).
Paul's outcome data indicate that from pre- to posttherapy he moved past the caseness threshold on two of the three measures (SCL-90 and PQ) but that the amount of change was reliable on only one of them (PQ).
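As an illustration of how these two criteria can be operationalized, here is a minimal sketch (not part of the original analysis). The caseness cutoff of 3.0 and the reliable change minimum of .53 are the PQ values reported in this article; the corresponding values for the SCL-90-R and IIP come from Table 1. The pre and post scores in the example are illustrative, roughly in the range reported for Paul rather than his exact data.

```python
# Sketch: applying Jacobson & Truax's two change criteria to one measure.
# Thresholds shown are those reported in the text for the PQ; Table 1
# supplies the corresponding values for the SCL-90-R GSI and IIP.

def classify_change(pre, post, caseness_cutoff, rc_min):
    """Classify pre-post change on a single outcome measure
    (assumes higher scores = greater distress)."""
    change = pre - post                     # positive = improvement
    reliable = abs(change) >= rc_min
    crossed_caseness = pre >= caseness_cutoff and post < caseness_cutoff
    if reliable and change > 0 and crossed_caseness:
        return "reliable, clinically significant improvement"
    if reliable and change > 0:
        return "reliable improvement"
    if reliable and change < 0:
        return "reliable deterioration"
    return "no reliable change (within measurement error)"

# Illustrative PQ values (caseness cutoff = 3.0, RC min = .53):
print(classify_change(pre=4.4, post=2.4, caseness_cutoff=3.0, rc_min=0.53))
```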
Next, to assess for negative changes, the researcher can ask the client to describe any negative changes that might have occurred over the course of therapy.
For example, at posttreatment, when Paul was asked about negative changes, he noted that he did not feel young anymore.
Finally, clients can be asked to evaluate the importance of changes, perhaps using rating scales (cf. Kazdin, 1999). In the Change Interview, the client rates the importance of each change, using a five-point scale. In addition, the manner of the client's description can be examined for qualifiers and other forms of ambivalence. Thus, in his posttherapy Change Interview, Paul rated all of his positive changes as either "very" or "extremely" important. By contrast, he rated his one negative change, feeling old, as "slightly" important. His descriptions of changes were directly stated without qualifiers and even included occasional intensifiers ("I am in fact more calm than I've been in a long time").
Although examining whether changes are trivial or negative is fairly straightforward, it is important to consider both positive and negative evidence and to interrogate discrepancies among measures and types of evidence.
Conclusions. On the one hand, Paul's Change Interview data clearly support the view that he improved in several important ways. On the other hand, of his quantitative outcome measures, only the PQ shows clear improvement at posttreatment.
Thus, the evidence for substantial, positive change is mixed.
Statistical Artifacts
Related to the possibility of trivial change is statistical error, including measurement error, regression to the mean, and experimentwise error.
Measurement error. This involves random inconsistency on quantitative measures, stemming from inattention in completing forms, ambiguous wording of items, misunderstanding of the meaning of items, and rating tasks that exceed the ability of raters to accurately characterize their experiences. The standard error of the difference (Sdiff) provides an appropriate estimate of error in measuring client change (Jacobson & Truax, 1991): Sdiff = s1 √[2(1 – rxx)], where rxx is test–retest reliability and s1 is the standard deviation of a comparable normative population.
This formula allows one to establish a confidence interval for defining a minimum reliable change index (RCI) value for client change at either the traditional 95% level (1.96 Sdiff), as proposed by Jacobson and Truax (1991), or the 80% level (1.29 Sdiff) proposed here. Client change that is less than the minimum RCI value is judged to reflect measurement error. Table 1 contains RCI minimum values for three common outcome measures: SCL-90, IIP, and PQ.
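The calculation behind these RCI minimum values is straightforward, and the following sketch (not part of the original article) spells it out. The reliability and normative standard deviation used in the example are made-up placeholders; the actual values used for the SCL-90-R, IIP, and PQ are those cited from Barkham et al. (1996) and Ogles et al. (1995).

```python
# Sketch: deriving a reliable change index (RCI) threshold from
# test-retest reliability and a normative standard deviation
# (Jacobson & Truax, 1991).

import math

def rci_threshold(s1, r_xx, z=1.29):
    """Minimum raw-score change counted as reliable.

    s1   -- standard deviation of a comparable normative population
    r_xx -- test-retest reliability of the measure
    z    -- 1.96 for the traditional 95% level; 1.29 for the 80% level
            ("reasonable assurance") proposed in the text
    """
    s_e = s1 * math.sqrt(1 - r_xx)       # standard error of measurement
    s_diff = math.sqrt(2 * s_e ** 2)     # standard error of the difference
    return z * s_diff

# Example with placeholder values for a seven-point rating scale:
print(round(rci_threshold(s1=1.0, r_xx=0.80, z=1.29), 2))  # 80% criterion
print(round(rci_threshold(s1=1.0, r_xx=0.80, z=1.96), 2))  # 95% criterion
```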
Paul's pre-to-post change on two of these three measures is less than the prescribed values, although his change on the PQ is statistically reliable and would easily survive a Bonferroni correction. However, the frequent drastic shifts in Paul's weekly PQ scores raised issues about greater-than-expected temporal instability (consistent with his atypical bipolar diagnosis) and suggested that it would be a good idea to use the median of his first three and last three PQ scores. The difference between these pre- and posttherapy median PQ scores greatly exceeded the minimum RCI value of .53.
Regression to the mean by outliers. Regression to the mean occurs when measurements with less-than-perfect reliability are selected on the basis of their extreme values. This introduces bias that is not present when the measurement is later repeated, resulting in the second measurement taking a less extreme value, thus producing illusory change. For example, one possible explanation for the numerous sharp spikes in Figure 1 is measurement error followed by regression to the mean. (Bipolar cycling between depressive and hypomanic states is another explanation, given the client's diagnosis.)

If regression to the mean is operating, then repeating the measurement before beginning therapy is likely to reveal a sharp drop; if this occurs, the second measurement can be used as the pretest. If scores are consistent or increase across the two pretests, they can be combined (e.g., using the median of the first three scores and last three scores of a weekly change measure). A more qualitative approach to assessing regression to the mean is to perform careful pretreatment assessment to determine the duration of the client's problems.
As Figure 1 indicates, Paul's two pretreatment PQ scores were fairly stable (both above 4 or "moderately distressed"), indicating that they were representative of his usual responses. Thus, the substantial changes observed in Paul's PQ scores are probably not a function of regression to the mean. Unfortunately, we did not obtain multiple pretests on the SCL-90 or IIP and do not have systematic data on problem duration.
Experimentwise error. This is a function of carrying out multiple significance tests on change measures. When examining many measures for evidence of change, some apparently reliable differences may occur as a result of chance alone. For example, when three measures are used to evaluate the reliability of pre-post change, with the relaxed standard proposed, each measure has a .2 probability of indicating change when none existed (Type I error). Compounded across three measures, the probability of one or more measures of three showing reliable change by chance is .49. The solution here is to require reliable change on two of three measures (this corresponds to a probability of .10) or on one measure at a more conservative probability level, such as p < .05. Requiring replication of reliable change across different outcome measures allows us to designate a client as demonstrating "global reliable change."

Using these criteria, Paul showed reliable change at posttreatment on only one of the three measures, thus failing to demonstrate global reliable change. However, he did satisfy this standard at 6-month follow-up (see last two columns in Table 1), and his posttherapy PQ change exceeded the p < .05 significance level.
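As a check on the arithmetic above, this brief sketch (not part of the original article) reproduces the quoted probabilities under the assumption that the three measures err independently.

```python
# Sketch: compounding of Type I error across multiple outcome measures,
# each tested at the relaxed p < .2 reliable-change standard.

from math import comb

def prob_at_least_k_by_chance(n_measures, k, alpha):
    """Probability that k or more of n independent measures show
    'reliable change' by chance alone, each with false-positive rate alpha."""
    return sum(
        comb(n_measures, j) * alpha**j * (1 - alpha) ** (n_measures - j)
        for j in range(k, n_measures + 1)
    )

print(round(prob_at_least_k_by_chance(3, 1, 0.2), 2))  # ~.49: one or more of three
print(round(prob_at_least_k_by_chance(3, 2, 0.2), 2))  # ~.10: two of three ("global reliable change")
```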
Conclusions. Regarding statistical artifacts, Paul's results are mixed. Regression to the mean is unlikely to have accounted for pre-post change on the PQ. However, the data do not support global reliable improvement at posttherapy, although they do support it at 6-month follow-up. At posttherapy, it would be most accurate to say that Paul shows reliable but limited change.
Relational Artifacts
Apparent client improvement may also reflect interpersonal dynamics between client and therapist or researcher, in particular attempts to please the latter. The classic relational artifact is the legendary (but impossible to attribute) "hello–goodbye" effect, in which the client enters therapy emphasizing distress to persuade the research staff to accept him or her. However, at the other end of therapy, the client emphasizes positive functioning either to express gratitude to the therapist and research staff or to justify ending therapy. I suspect that the use of fixed time limits in most therapy research works to strengthen this effect: If therapy is going to end anyway, there is little to be gained by trying to look worse than one is, and one might as well make the best of it!

Evaluating the plausibility of reported therapy attributions. To determine the role of self-presentational interpersonal artifacts, client narrative descriptions are invaluable.
These accounts are probably most credible when they emerge spontaneously in therapy sessions or research interviews; however, researchers may prefer to obtain these accounts systematically via questionnaire or interview. Because interviews are a highly reactive form of data collection, client qualitative accounts of the effects of therapy need to be read very carefully for their nuance and style. Here is where several of Bohart and Boyd's (1997) plausibility criteria come into play, specifically, elaboration and discrimination. In particular, a credible client account of therapy's influence is elaborated: It contains specific details about what has changed and how the change came about; general descriptions are backed up by supportive detail. In addition, there is a mixture of positive, negative, and neutral descriptions (differentiation). On the other hand, highly tentative or overly positive descriptions of the therapy as well as positive reports that lack detail or cannot be elaborated even under questioning are likely to indicate interpersonally driven self-report artifacts.
Interview strategy. The validity of client accounts is also enhanced if a researcher (rather than the therapist) interviews the client and if the researcher conducts an extended, in-depth interview in which he or she encourages thoughtful self-reflection and openness on the part of the client.
Measuring relational response tendencies. A final strategy for dealing with relational artifacts is to use quantitative outcome measures (e.g., Tennessee Self-Concept Scale; Fitts & Warren, 1996) that contain indexes of the client's tendency to present in ways that emphasize or downplay problems.
Although I did not give Paul a social desirability or other quantitative validity scale, his Change Interview data contained substantial detail and at least some negative descriptions. Nevertheless, his manner and choice of language suggested that he may have deferred to me as an apparently successful authority figure of roughly the same age. This raises the possibility that he may have held back negative views of his therapy to avoid the possibility of offending me. This is one possible explanation for the discrepancy between his quantitative outcome measures and his very positive descriptions in the Change Interview. Because I was aware of the possibility of his trying to please me, I tried to communicate the attitude that his critical comments would be especially appreciated because they would help improve the therapy.
Conclusions. Overall, the detailed, differentiated nature of the qualitative data makes it unlikely that relational artifacts are enough to explain the positive changes Paul described.
Expectancy Artifacts
Cultural or personal expectations ("scripts") or wishful thinking may give rise to apparent client change. That is, clients may convince themselves and others that because they have been through therapy they must, therefore, have changed. We expect posttherapy accounts to be particularly vulnerable to this sort of retrospective expectancy bias. However, longitudinal measurement of change is no guarantee against clients expecting themselves to do better at the end of therapy and, therefore, giving themselves the benefit of the doubt when recalling, integrating, and rating subtle or ambiguous phenomena such as mood symptoms, relationships, and self-evaluations.
Fortunately, the distinction between expectations and experience can be made partly by examining the language clients use to describe their experience. This is because expectation-driven descriptions must rely on shared cultural schemas about the effects of therapy; therefore, such "scripted" descriptions will typically make use of standard or clichéd phrases, such as "someone to talk to" or "insight into my problems." (See Elliott & James, 1989, for a review.) Client accounts of changes that conform entirely to cultural stereotypes are less credible than those that reflect more unusual experiences. By contrast, descriptions that are idiosyncratic in their content or word choice are more believable. In addition, expectation-driven expressions typically sound vague, intellectualized, or distant from the client's experience. This is quite different from descriptions that are delivered in a detailed, careful, and self-reflective manner that indicates their grounding in the client's immediate experience (cf. Bohart & Boyd, 1997). For example, Paul's descriptions generally contained a mixture of stock elements (the idea that releasing blocked feelings is therapeutic) that were often qualified in idiosyncratic ways (e.g., typifying this release as a gradual process occurring over the course of a year). Some of Paul's descriptions of his change process did have an intellectualized, self-persuasive quality (e.g., "I think I could see the progress, and that can only help build self-esteem and self-confidence. So as that goes up, maybe proportionately, maybe the anxiety goes down" [italics added]). Faced with this self-speculative account, I asked Paul to check the accuracy of his description: "Is that what it feels like, that somehow you have this sense of your own having made progress, and that somehow makes the anxiety less?" Probes such as this enabled him to elaborate a more experientially based account of extended, painful grieving for deceased family members.
In addition, if a client reports being surprised by a change, it is unlikely to reflect generalized expectancies or stereotyped scripts for therapy. Researchers can determine this more systematically by asking clients to rate the degree to which they expected reported changes. For example, on four of his six changes, Paul rated himself as "somewhat surprised."

Paul's descriptions provide some evidence for the influence of therapy "scripts." However, I believe that the weight of the evidence points clearly toward a view of his descriptions as primarily experience based. In particular, the existence of novel recasting of stock phrases, his ability to elaborate in experience-near terms, as well as his claim to have been somewhat surprised by most of the changes he experienced all point to this conclusion.
Self-Correction Processes: Self-Help and Self-Generated Return to Baseline Functioning

The remaining nontherapy explanations assume that change has occurred but that factors other than therapy are responsible. First, client internally generated maturational processes or self-help efforts may be entirely responsible for observed changes. For example, the client may have entered therapy in a temporary state of distress that has reverted to normal functioning via the self-limiting nature of temporary crises or the person's own problem-solving processes. Alternatively, the change could be a continuation of an ongoing developmental trend. In these instances, client self-healing activities operate before or independently of therapy.
Direct and indirect self-report strategies. A general strategy for evaluating the final four nontherapy explanations is to ask the client. For example, when Paul was asked what brought about his changes, the first thing he said was "being honest with myself, and being open to change, to trying new things." By itself, this statement would qualify as a report of self-generated change. However, without prompting, Paul then went on to indicate that this self-generated change process was related to therapy: "Since the therapy, I think I've had a lot more courage to really try new things. It's been exciting."

Similarly, the client can also be asked to assess how likely he or she feels the change would have been without therapy. For example, Paul rated three of his six changes (including the most important one of becoming more calm in the face of challenges) as "very unlikely without therapy," indicating his view that these changes clearly would not have happened without therapy. By contrast, he rated not feeling young any more (a negative change) as "somewhat likely" without therapy and the improvement of his financial situation as "neither likely nor unlikely" without therapy.
Therapist process notes provide an efficient source of information about client self-help efforts and can be used in conjunction with shifts in PQ score. Paul showed a large drop on his PQ after Session 1; in her process notes, Paul's therapist noted that Paul had recently made the effort to speak to a friend with similar loss issues and that this conversation had made him feel less alone.
Baseline and multiple pretest strategies. Self-correction, in particular, can also be evaluated by comparing client change to a temporal or expectational baseline. A temporal baseline requires measuring the duration or stability of the client's main problems or diagnoses. In lieu of repeated pretreatment measurement, clinicians generally measure the baseline of a client's problem retrospectively simply by asking the client how long he or she has had the problem. This is typically accomplished in a clinical interview, but it can also be done via a questionnaire or extracted from therapy sessions or therapist process notes.
We do not have systematic data from Paul on the duration of his problems; however, a review of session tapes and therapist process notes made it clear that two of his main problems—anger/cynicism and unresolved grief—were difficulties of at least 10 years' duration, whereas his financial problems and anxiety about his son were of relatively recent vintage (i.e., on the order of months). The duration of his central problems makes self-correction an unlikely explanation for his change on the PQ.
It may also prove valuable to listen for client narratives of self-help efforts begun before therapy, as when a depressed client applies for therapy services as part of a larger self-help effort that includes joining a health club, starting to take St. John's wort, and making an effort to spend more time with friends. Such a self-generated process is likely to instigate a cascade of nontherapy change processes, including extratherapy life events and even psychobiological factors.
Conclusions. The long duration of Paul's problems makes it very unlikely that self-correcting processes are primary, independent causal processes. Although Paul refers to self-correction processes in his interview responses, he emphasizes the causal role of therapy. On the other hand, the large drop in PQ score after Session 1 comes in conjunction with reported self-help activities and occurs before therapy could be reasonably expected to have an effect. Thus, there is clear support for self-correction as a partial influence on Paul's changes, but the evidence indicates that it is unlikely that self-correction processes were primarily responsible or that they operated without themselves reflecting the influence of therapy.
Extratherapy Events
Extratherapy life events include changes in relationships such as deaths, divorces, initiation of new relationships, marriages, births, and other relational crises as well as the renegotiation of existing relationships. In addition, clients may change jobs, get fired from jobs, get promoted or take on new work responsibilities, change recreational activities, and so on. Extratherapy events may be discrete or may involve chronic situations such as an abusive relationship or the consequences of substance abuse or other problematic behavior patterns. They may also include changes in health status as a result of physical injuries or illnesses or medical treatments, where these do not directly impinge on psychological functioning. Further, extratherapy events can contribute both positively and negatively to therapy outcome and have the potential to obscure the benefits of a successful therapy and to make an unsuccessful therapy appear effective. Finally, it is important to consider the bidirectional influence of therapy and life events on one another.
The most obvious method for evaluating the causal influence of extratherapy events is to ask the client. In the Change Interview, clients are asked what they think brought about changes. If a client does not volunteer extratherapy events, the interviewer inquires about them. In addition, therapist process notes and session recordings are useful sources of information about extratherapy events because clients almost always provide in-session narratives about important positive or negative extratherapy events. A useful method for locating important extratherapy events is to look at weeks associated with reliable shifts in weekly change measures such as the PQ. In addition, as noted in the section on self-correction, the Change Interview asks the client to estimate the likelihood that the change would have occurred without therapy.
Extratherapy events are the major nontherapy counterexplanations in Paul's treatment. When Paul was asked to talk about what he thought had brought about his changes, he spontaneously described the following extratherapy factors: "support from my family . . . reading . . . I have to say my exercise; that's important . . . new activities. Mainly the jobs." His PQ data reveal one large, clinically significant drop at Session 2 and three "spikes," one each at Sessions 4, 24, and 39. Consistent with the drop before Session 2, the therapist's process notes describe the client as feeling better, linking this to positive developments in his job and family as well as a discussion with a friend with similar problems. On the other hand, extratherapy events had a clear negative influence in the weeks before Sessions 4 and 25. In both cases, Paul complained of feeling depressed and angry about problems with his severely depressed teenage son and reported rebuffs from unsympathetic others (his wife and mental health professionals). There was no clear extratherapy event associated with the spike before Session 39, his last session, leaving me to speculate that this was a response to an intratherapy event: termination.
Paul's data (including attribution ratings described in the previous section) indicate that extratherapy factors played a role in his changes but not to the exclusion of therapy. Moreover, based on the weekly PQ data, extratherapy events appear to have played more of a negative role than a positive one.
Psychobiological Causes
The next possibility is that credible improvement is present but is due primarily to direct, unidirectional psychophysiological or hormonal processes, including psychotropic medications or herbal remedies, the hormonal effects of recovery or stabilization after a major medical illness (e.g., stroke) or childbirth, or seasonal and endogenously driven mood cycles. This nontherapy explanation is a major issue given that many clients seeking therapy are currently taking medications for their mood or anxiety problems. This is a particular problem for psychotherapy research when clients begin or change their medications within a month of beginning psychotherapy or during the course of therapy.
Assessing medication. The most obvious approach to evaluating psychobiological factors is to keep track of medications, including changes and dose adjustments.
It is also important to ask about herbal remedies. (The Change Interview includes questions about both of these.) Thus, at his posttreatment Change Interview, I learned that 1 month before the end of therapy, Paul had increased his dose of citalopram hydrobromide (Celexa) to 20 mg/day (he had been taking it for 6 months, after switching from sertraline [Zoloft]). He was also continuing to take clonazepam for anxiety (2 mg/day) and had been doing so for the past 2 years. Thus, Paul had been stable on his antianxiety medication since well before the beginning of therapy and had been taking selective serotonin reuptake inhibitors (SSRIs) for almost as long.
Therefore, there appeared to be no connection between changes in his medication and his weekly PQ ratings.
Using in-session narratives. In addition, client interview data and therapist process notes provide useful sources of information about medication and the effects of other medical and biological processes. For example, at his 6-month follow-up interview, Paul disclosed that he had suffered from a major, life-threatening illness during the intervening time and, as a result, had experienced a greater sense of focus and appreciation for what was important.
Conclusions. The evidence for an influence of medication or other biological processes on the level of Paul's problems was weak at best, at least during the time he was attending therapy.
Reactive Effects of Research
The final nontherapy explanation involves the reactive effects of taking part in research. According to this hypothesis, client outcome is affected mostly as a function of being in research. Such reactive effects include research activities (e.g., posttraumatic stress disorder assessment, tape-assisted recall methods) that enhance (or interfere with) therapy; the relation with the research staff, which is sometimes better than with the therapist; and an enhanced sense of altruism, which allows clients to transmute their suffering by viewing themselves as helping others. On the other hand, research activities can have negative effects on clients, especially if they are particularly evocative or time consuming.
Teasing out the reactive effects of research on client outcome can be difficult, but qualitative interviewing can help here as well if clients are asked about the effects the research has on them. Another possibility is to use nonrecruited clients and unobtrusive data collection. Spontaneous comments during sessions, summarized in therapist process notes, are also worth pursuing. For example, in Session 4, Paul expressed concerns at not being able to be totally open in therapy because of his concerns about the recording equipment. (Several times during therapy, he referred to "all you assholes watching this.") In addition, he sometimes wrote snide comments on his postsession questionnaire. Paul seemed to take being in the research as more of an inconvenience than a benefit, making it highly unlikely that the research was responsible for the changes he reported.
Summary and Conclusions of HSCED Analysis of Paul’s Therapy
Reviewing the results of applying HSCED to Paul's treatment, there is clear or moderate support for three of five types of direct evidence: retrospective attribution, process–outcome mapping, and early change in stable problems. Because the standard is replication across two or more types of direct evidence, this is more than adequate.

In terms of negative evidence, the standard is that no nontherapy explanation can, by itself or in combination with other nontherapy explanations, fully explain the client's change, although nontherapy explanations can and usually do play some role in accounting for change. For Paul, there was clear or moderate support against a primary role for all nontherapy explanations, except experimentwise error. The analysis indicates that the change reported on the PQ was unlikely to be due to chance but identifies Paul's change as narrowly limited to his presenting problems (indicated by lack of change on the SCL-90 and IIP). Self-help and extratherapy events are also important supporting influences but not to the exclusion of therapy.
Beyond this, however, what have we learned about psychotherapy from this intensive analysis? First, most simplistically, the analysis supports the claim that process-experiential therapy can be effective with clients like Paul (i.e., clients with major depressive disorder plus hypomania ["bipolar II"]), particularly when they present with issues of anger and unresolved grief. Second, although the therapy was effective, there was still room for improvement, especially with regard to a broader range of problems and areas of functioning. Third, the analysis makes it clear that therapy exerted its helpful effects within a context of other, supporting change processes, especially extratherapy events and self-help efforts.
Specific Change Processes
Finally, in the process of sorting out the role of therapy in Paul's change process, I came across descriptions of what he found helpful in his therapy, descriptions with substantial practical utility. Some of these took the form of postsession descriptions of significant events, which clearly indicate the central importance of exploring unresolved feelings of anger toward family members. However, Paul's descriptions in his posttherapy Change Interview were more revealing. From examining his discourse, it became clear that Paul did not have a clear "story" about the connection he felt between his therapy and his key change of feeling more calm in the face of challenges. Nevertheless, his account provided enough detail about the therapeutic elements involved and their connections to allow me to construct the following model of his change process:

(a) Paul credited his therapist for "bring[ing] me back to certain areas that she thought I needed to work on, which I might have overlooked," resulting in (b) "a consistent process of sharing my problems, my frustrations, my heartbreaks," which (c) "gave me a process of grieving, maybe not all the stages of grief, but some." This grieving process was one of being "able to gradually release it over a year or however long." As a result of this, he said, (d) "then you see a tangible result. And even before [my nephew's] funeral I went out to my family's graves and I was able to cry." (e) After this, Paul said, he "start[ed] maybe for the first time in a long time to recognize my progress," and (f) "that can only help build self-esteem and self-confidence." (g) Finally, Paul implied that this extended grieving/release process had begun to undo his earlier problematic functioning ("I kept a lot of things bottled up [before], and I think that just adds pressure, adds to the anger, adds to the anxiety"), leading to (h) reduced anger and anxiety about hurting other people with his anger ("feeling more calm and not blowing challenges out of proportion").
This rich account highlights the therapist's main contributions, in the first three steps of the model, as helping the client stay focused on difficult issues; facilitating grieving (trauma retelling and empty chair were used for this); and patiently persisting in this process for an extended period of time (39 sessions). The last five steps primarily show how the client built on the therapy through his own self-help efforts as these interacted with life events such as his nephew's death. The account also supports the conclusions of the hermeneutic analysis by providing a plausible account of the chain of events from cause (therapy) to effect (outcome) (Haynes & O'Brien, 2000).
Issues in HSCED
To perform an HSCED study, one needs to (a) find an interesting and agreeable client, (b) collect appropriate measures, (c) apply them to construct a rich case record, (d) analyze the information to see whether change occurred, (e) establish whether direct evidence linking therapy to client change is present and replicated, (f) analyze the evidence for each of the eight nontherapy explanations, (g) interpret and weigh the various sets of sometimes conflicting information to determine the overall strength and credibility of each nontherapy explanation, and (h) come to an overall conclusion about the likelihood that therapy was a key influence on client change.
HSCED is a new development and clearly needs further testing and elaboration.
My team and I have applied HSCED to Paul and other clients seen in our research and training clinic (Elliott et al., 2000; Partyka et al., 2001). What we have learned so far can be summarized in the following discussion.
First, the question "Did the client improve?" has turned out to be more complex than we first thought. Our clients often present with a mixed picture, showing improvement on some measures and not others or telling us that they had made great strides when the quantitative data contradicted this (see Partyka et al., 2001). It is important not to underestimate the complexity of this initial step.
Second, this experience has convinced us that more work is needed on how to integrate contradictory information. We need better strategies for determining where the "weight of the evidence" lies (see Schneider, 1999).
Third, we find ourselves in need of additional creative strategies for evaluating nontherapy explanations. For example, to bolster the self-reflective/critical process of examining nontherapy processes, Bohart (2000) proposed a form of HSCED that relies on an adjudication process involving separate teams of researchers arguing for and against therapy as a primary influence on client change, with final determination made by a "research jury." However, a less involved process might simply make use of two researchers, one (perhaps the therapist) supporting therapy as an important influence, the other playing "devil's advocate" by trying to support alternative explanations. The researchers might present both sides, leaving the final decision to a scientific review process (cf. Fishman, 1999). We are currently testing a form of adjudicated HSCED (Partyka et al., 2001).
Fourth, in comparing HSCED to traditional RCT design, we have found that HSCED requires fewer resources but is in some ways more difficult and demanding, in that it requires researchers to address complexities, ambiguities, and contradictions ignored in traditional designs. These complexities are present in all therapy research, but RCTs are able to ignore them by simplifying their data collection and analysis. In my experience, every group design is composed of individual clients whose change processes are as rich and contradictory as those of the clients we have studied.
The fact that these complexities are invisible in RCTs is yet another reason to distrust them and to continue working toward viable alternatives that do justice to each client’s uniqueness while still allowing us to determine (a) whether the client has changed, (b) whether these changes have anything to do with our work as therapists, and (c) what specific processes in therapy and in the client’s life are responsible for these changes.
Beyond these relatively delimited research applications, HSCED raises broader issues, including the appropriate grounds for causal inference in applied settings, external validity, and the nature of causality in psychotherapy.
Causal Inference in the Absence of RCTs
It is worth noting that standard suspicions about systematic case studies ignore the fact that skilled practitioners and lay people in a variety of settings continually use generally effective but implicit practical reasoning strategies to make causal judgments about single events, ranging from medical illnesses to lawsuits to acts of terrorism (see Schön, 1983). For example, legal and medical practices are both fundamentally systems for developing and testing causal inferences in naturalistic situations.
Thus, the task for HSCED is to develop procedures that address various possible alternative explanations for client change. Mechanistic data collection and analysis procedures will not work. Instead, the researcher must use a combination of informant (client and therapist) and observer data collection strategies, both qualitative and quantitative. These strategies confront the researcher with multiple possible indicators, of which he or she must make sense, typically by looking for points of convergence and interpreting points of contradiction.
External Validity With Single Cases
Logically, what can be demonstrated by a single case such as the one I have presented is the possibility that this kind of therapy (process-experiential, specifically, using primarily empathic exploration and empty chair work over the course of about 40 sessions) can be effective with this kind of client (male, middle-aged, European-American, intellectualizing, psychologically reactant) with this kind of problem (e.g., recurrent depression with hypomanic episodes, unresolved multiple losses, current family conflicts). Predicting how effective a similar therapy would be with a similar client would require a program of systematic replication (Sidman, 1960) and, ultimately, a summary of a collection of similar cases, analogous to precedents established by a body of case law (Fishman, 1999).
Nature of Causation in Psychotherapy
Another broad issue concerns the kinds of causal processes that are relevant to understanding change in psychotherapy. The following three propositions seem most consistent with how clients change over the course of therapy. First, change in psychotherapy involves opportunity causes (bringing about change by opening up possibilities to the client) rather than coercive causes (forcing or requiring change).
Psychotherapy appears to work by offering clients occasions to engage in new or neglected ways of thinking, feeling, and acting; by promoting the desirability of possible changes; and by helping clients remove obstacles to desired behaviors or experiences.
Second, if opportunity causes are the rule in therapy, then, by definition, change in therapy involves multiple contributing causes (“weak” or “soft” causation) rather than sole causes (“strong” or sufficient causation). After all, opportunities are not commands and can always be rejected or simply ignored. Therapist responses in therapy sessions and even client–therapist interactions in sessions can provide at best only a partial explanation of client change. Other factors must be assumed to play important roles as well, including extratherapy life events, biological processes, and especially client internal self-help processes. A complete interpretation of the change process probably requires weaving together the different therapy and nontherapy strands into a narrative such as the one I presented at the end of the analysis section of this article.
Finally, the development of explanations of therapy outcome is a fundamentally interpretive process, involving a “double hermeneutic” (Rennie, 1999) of client (engaged in a process of self-interpretation) and researcher (engaged in a process of interpreting the interpreter). The double hermeneutic suggests that the client is actually a coinvestigator, who always acts as an active self-interpreter and self-changer.
As researchers, we follow along behind, performing a second, belated act of interpretation, carefully sifting through the multitude of sometimes contradictory signs and indicators provided by the client. Although we are sometimes weighed down by methodology, it remains our greatest desire to understand how our clients change, so that we can become more effective in helping them do so.
References
Barkham, M., Rees, A., Stiles, W. B., Shapiro, D. A., Hardy, G. E., & Reynolds, S. (1996). Dose-effect relations in time-limited psychotherapy for depression. Journal of Consulting and Clinical Psychology, 64, 927–935.
Bohart, A. C. (2000, June). A qualitative “adjudicational” model for assessing psychotherapy outcome. Paper presented at the meeting of the Society for Psychotherapy Research, Chicago, IL.
Bohart, A. C., & Boyd, G. (1997, December). Clients’ construction of the therapy process: A qualitative analysis. Poster presented at the meeting of the North American Chapter of the Society for Psychotherapy Research.
Cook, T. D., & Campbell, D. T. (1979). Quasi-experimentation: Design and analysis issues for field settings. Chicago: Rand McNally.
Derogatis, L. R. (1983). SCL-90-R administration, scoring and procedures manual–II. Towson, MD: Clinical Psychometric Research.
Elliott, R., & James, E. (1989). Varieties of client experience in psychotherapy: An analysis of the literature. Clinical Psychology Review, 9, 443–467.
Elliott, R., Shapiro, D. A., & Mack, C. (1999). Simplified Personal Questionnaire procedure manual. Toledo: University of Toledo, Department of Psychology.
Elliott, R., Slatick, E., & Urman, M. (2001). Qualitative change process research on psychotherapy: Alternative strategies. Psychologische Beiträge, 43, 69–111. (Also published as J. Frommer & D. L. Rennie (Eds.), Qualitative psychotherapy research: Methods and methodology (pp. 69–111). Lengerich, Germany: Pabst Science Publishers)
Elliott, R., Smith, S., Magaña, C. G., Germann, J., Jersak, H., Partyka, R., Urman, M., Wagner, J., & Shapiro, D. A. (2000, June). Hermeneutic single case efficacy design: A pilot project evaluating process-experiential therapy in a naturalistic treatment series. Paper presented at the meeting of the Society for Psychotherapy Research, Chicago, IL.
Fishman, D. B. (1999). The case for pragmatic psychology. New York: New York University Press.
Fitts, W. H., & Warren, W. L. (1996). Tennessee Self-Concept Scale (2nd ed.). Los Angeles, CA: Western Psychological Services.
Goscinny, R., & Uderzo, A. (1969). Asterix the Gaul (A. Bell & D. Hockridge, Trans.). London: Hodder Dargaud. (Original work published 1961)
Greenberg, L. S. (1986). Change process research. Journal of Consulting and Clinical Psychology, 54, 4–9.
Haaga, D. A. F., & Stiles, W. B. (2000). Randomized clinical trials in psychotherapy research: Methodology, design, and evaluation. In C. R. Snyder & R. E. Ingram (Eds.), Handbook of psychological change (pp. 14–39). New York: Wiley.
Hayes, S. C., Barlow, D. H., & Nelson-Gray, R. O. (1999). The scientist practitioner: Research and accountability in the age of managed care (2nd ed.). Needham Heights, MA: Allyn & Bacon.
Haynes, S. N., & O’Brien, W. O. (2000). Principles of behavioral assessment: A functional approach to psychological assessment. New York: Plenum.
Horowitz, L. M., Rosenberg, S. E., Baer, B. A., Ureño, G., & Villaseñor, V. S. (1988). Inventory of Interpersonal Problems: Psychometric properties and clinical applications. Journal of Consulting and Clinical Psychology, 56, 885–892.
Jacobson, N. S., & Truax, P. (1991). Clinical significance: A statistical approach to defining meaningful change in psychotherapy research. Journal of Consulting and Clinical Psychology, 59, 12–19.
Julius Caesar, G. (1960). War commentaries of Caesar (R. Warner, Trans.). New York: New American Library. (Original work published 51 BCE)
Kazdin, A. E. (1981). Drawing valid inferences from case studies. Journal of Consulting and Clinical Psychology, 49, 183–192.
Kazdin, A. E. (1998). Research design in clinical psychology (3rd ed.). Needham Heights, MA: Allyn & Bacon.
Kazdin, A. E. (1999). The meaning and measurement of clinical significance. Journal of Consulting and Clinical Psychology, 67, 332–339.
Llewelyn, S. (1988). Psychological therapy as viewed by clients and therapists. British Journal of Clinical Psychology, 27, 223–238.
Mohr, L. B. (1993, October). Causation and the case study. Paper presented at the meeting of the National Public Management Research Conference.
Morgan, D. L., & Morgan, R. K. (2001). Single-participant research design. American Psychologist, 56, 119–127.
Ogles, B. M., Lambert, M. J., & Sawyer, J. D. (1995). Clinical significance of the National Institute of Mental Health Treatment of Depression Collaborative Research Program data. Journal of Consulting and Clinical Psychology, 63, 321–326.
Partyka, R., Elliott, R., Alperin, R., Dobrenski, R., Wagner, J., Castonguay, L., Watson, J., & Messer, S. (2001, November). An adjudicated hermeneutic single case efficacy study of brief experiential therapy for panic disorder. Paper presented at the meeting of the North American Chapter of the Society for Psychotherapy Research.
Peterson, D. R. (1968). The clinical study of social behavior. New York: Appleton-Century-Crofts.
Piper, W. E. (2001). Collaboration in the new millennium. Psychotherapy Research, 11, 1–11.
Rennie, D. L. (1999). Qualitative research: A matter of hermeneutics and the sociology of knowledge. In M. Kopala & L. A. Suzuki (Eds.), Using qualitative methods in psychology (pp. 3–13). Thousand Oaks, CA: Sage.
Schneider, K. J. (1999). Multiple-case depth research. Journal of Clinical Psychology, 55, 1531–1540.
Schön, D. A. (1983). The reflective practitioner: How professionals think in action. New York: Basic Books.
Sidman, M. (1960). Tactics of scientific research. New York: Basic Books.
Strupp, H. H., Horowitz, L. M., & Lambert, M. J. (Eds.). (1997). Measuring patient changes in mood, anxiety, and personality disorders: Toward a core battery. Washington, DC: American Psychological Association.