Monday, September 24, 2007

The Therapeutic Relationship

When I finished the readings for this week, I was actually kind of disappointed. There wasn't much that I got really excited or fired up about. I thought the articles were a little boring, and I was worried that I would have nothing to say in my blog! One thing that really bothered me in the Kirschenbaum & Jourdan (2005) article was that the authors partly based their evaluation of the status of Rogers' theories on a PsycInfo search! I thought the arguments using empirical research as a basis were pretty weak in both articles. Maybe I'm just primed to expect serious empirical evidence as a result of the past two weeks, but I just didn't find either article's theory and research support very compelling. I also felt like Castonguay et al. (2006) made a lot of sweeping generalizations. It was a nice review paper, but I'm not convinced, for instance, that just because measures of the alliance exist, all therapists should be using them (pg. 273), or that there is really enough evidence to warrant the "forecasting" of which patients clinicians may have difficulty working with (pg. 272). This suggestion brings me back to Meehl's arguments about credentialed knowledge.

However, the more I thought about it, the more I realized that even though the therapeutic alliance may not be an incredibly exciting topic and may not be firmly grounded in scientific research, there are some important points to consider, especially for us clinical folk who will most likely be putting these techniques to use next year! I mention "techniques" because that is how I would argue the alliance should be viewed (which I'm sure people have already argued; I'm just not aware of the research in this area). Rather than a "school of thought" or a particular orientation, the alliance seems most valuable for teaching therapists strategies that COULD foster a better, more positive working relationship during therapy, which in turn MAY contribute to client improvement. For instance, an "affective bond or positive attachment" (pg. 272) most likely will not hurt the chances of positive change, but the extent of its necessity is still debatable. Don't misunderstand me...I personally feel that the client-therapist relationship is extremely important and should not be taken lightly; I would just be hesitant to place too much emphasis on it if that means other important therapeutic techniques get ignored (especially in manualized treatments!).

Monday, September 17, 2007

ESTs: Part Deux

Just when I thought the logic of ESTs made complete sense...I guess being able to think critically about varying positions is what grad school is all about, but man, it's making my head spin. I'm discovering that I can be swayed pretty easily by "the written word," which is probably both a good and a bad thing, depending on the situation! Anyway, I wanted to touch broadly on a few issues from the readings this week, mainly from the Westen et al. (2004) article, the first being the "appropriate" sequence for testing a treatment. From the Chambless and Hollon (1998) article last week, we learned that the standard is to first prove that a treatment is efficacious in the lab, and only after this has been accomplished can the treatment be brought out into the community (to be proven effective). However, as Westen et al. (2004) point out, starting with such "pure" samples in highly controlled environments may seriously limit the generalizability of a treatment. Along these lines, RCTs typically have pretty stringent inclusion criteria and may exclude people with more complex, co-occurring symptoms, particularly those who show some personality disturbance. I do not think it is reasonable to assume that because a treatment has been shown to be efficacious for a highly specific sample with highly specific symptoms/conditions in a lab, it will necessarily fare just as well with all the variability and comorbidity (which the article states is the norm rather than the exception) that appears in clinical settings and communities. Having said that, I think I need to re-evaluate what I said in my blog last week about the necessity of treatment specificity! I do think it is important to identify exactly how a particular treatment works for some specific disorders, but I'm starting to believe that some conditions will benefit more from integrative strategies that can address/treat several different aspects of a person's condition. I like the suggestion at the end of the Westen et al. (2004) article about "using practice as a natural laboratory" (pg. 657).

This idea takes me to my next thought about the integration of theory and practice, which is the basis of the Sechrest & Smith (1994) article. My various mentors over the past few years have all reiterated the importance of grounding clinical practice in sound research. While I completely agree, the readings this week underscored the importance of a more "transactional philosophy of clinical science" (Westen et al., 2004), where science and practice can inform each other. Just as clinicians need to be aware of and up to date on current research, I think it's fair to set the same standard for researchers. In the words of Sechrest & Smith (1994), "...psychotherapy should become an integral part of all psychology" (pg. 4).

Monday, September 10, 2007

ESTs

With so much variability in symptoms and behaviors within particular psychological disorders, it makes sense that researchers and clinicians are identifying more specific treatments that target specific behaviors. However, standards for evaluating the effectiveness of these treatments are necessary, and the Chambless & Hollon (1998) article attempts to provide some structure for doing so. The authors acknowledge the complexity and magnitude of this task and point out that many disagreements will likely arise. Therefore, I'd like to point out a few of my concerns with their guidelines. First, the authors deviate from guidelines set forth by the Division 12 Task Force and state that "...if a treatment works, for whatever reason, and if this effect can be replicated by multiple independent groups, then the treatment is likely to be of value clinically..." (pg. 8). I do not think it's enough to know simply that a treatment works. In fact, merely proving that a treatment is more beneficial than no treatment at all seems to assume psychotherapy equivalence, a questionable theory discussed in the Hunsley & DiGiulio (2002) article. Researchers should be able to identify a treatment's specific nature and the mechanisms underlying its success. Because so many factors other than the treatment itself could account for a client's improvement, evidence of treatment specificity seems essential to me.

Another concern I had involves the specification of a treatment population. While I agree with the authors that it is necessary to show that a treatment is efficacious for a particular group of people, I had some issues with their emphasis on DSM diagnostic criteria as the means for defining a population. As we talked about last week, people with the same diagnosis could manifest very different symptoms (or at least differ in the degree to which certain symptoms manifest) and thus respond very differently (or not at all) to the same treatment. For instance, some people could be exhibiting social problems because of an actual social skills deficit and would respond well to social skills training, whereas others may be experiencing social problems due to social anxiety and would respond well to either systematic desensitization or social skills training (Trower, 1978). This was actually an example put forth in an article for our assessment class, "The Treatment Utility of Assessment" (Hayes, Nelson, & Jarrett, 1987), which fits pretty nicely into this week's topic. In any case, while Chambless & Hollon (1998) do acknowledge other factors that should also be considered, such as comorbidity and the age range of a population, I'd like to see more emphasis placed on defining treatment samples based on symptom homogeneity, not just diagnoses.

Monday, September 3, 2007

Comment on the DSM readings

Because I found the Widiger & Clark (2000) article to be the most interesting and controversial of the three readings, I will mostly comment on a few points raised by those authors. The article detailed several areas that need improvement or change in DSM-V, and while it may not have been the authors' intent to comment on exactly how these changes should be accomplished, it left many questions unanswered for me. For instance, in the discussion about determining what is meant by a clinically significant impairment, the diagnosis of mental retardation is used to illustrate the possibility of using points of demarcation along continuous distributions of functioning. However, as the article points out, a question then arises about how to reach a consensus on exactly where the point of demarcation should be for specific disorders. With regard to an IQ below 70 being necessary to diagnose MR, the whole idea of IQ itself is relatively controversial: what does IQ really measure, predict, and mean (Neisser, 1996)?

Somewhat related to this, I found the discussion about including laboratory findings in diagnostic criterion sets interesting, but also puzzling. It certainly does not make much sense to include autonomic functioning in the diagnostic criteria for some disorders, yet require no physiological tests to verify that those symptoms exist. The article mentions panic attacks as an example: if a client tells a clinician that he/she feels nauseous, sweats, gets dizzy, etc. when nervous, is the clinician supposed to take their word for it? In addition, the article points out that DSM-IV makes no reference to standardized assessment instruments, such as brain imaging techniques, for use in making diagnoses; such instruments seem necessary to me if the DSM is going to reference the neurophysiological factors involved in certain mental disorders, such as the role of neurotransmitters in depression. However, the article also raises valid points about the availability and cost of laboratory data. As a scientist, it makes sense to me in theory to incorporate lab tests and findings into diagnostic criteria, but I'm not sure how practical or realistic it would be to require them for all diagnoses. This is a problem that I think needs to be addressed: just because something may not be practical and may be hard to implement doesn't mean it's not necessary.