Practitioner's Guide to Using Research for Evidence-Informed Practice. Allen Rubin

ISBN 9781119858584




the intervention. This doesn't give us much information about how any given individual might have responded to the intervention. In practice, we are interested in successfully treating each and every client, not just the average.

      Moreover, we often don't know why some clients don't benefit from our most effective interventions. Suppose an innovative dropout prevention program is initiated in one high school, and 100 high-risk students participate in it. Suppose a comparable high school provides routine counseling services to a similar group of 100 high-risk students. Finally, suppose only 20 (20%) of the recipients of the innovative program drop out, as compared to 40 (40%) of the recipients of routine counseling. By cutting the dropout rate in half, the innovative program would be deemed very effective. Yet it failed to prevent 20 dropouts.
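The arithmetic behind that comparison can be sketched in a few lines; the numbers are the hypothetical ones from the example above:

```python
# Hypothetical dropout-prevention example: 100 high-risk students
# per school; 20 dropouts under the innovative program vs. 40
# under routine counseling.
innovative_dropouts, routine_dropouts, n = 20, 40, 100

innovative_rate = innovative_dropouts / n  # 0.20
routine_rate = routine_dropouts / n        # 0.40

# The innovative program cuts the dropout rate in half...
relative_reduction = 1 - innovative_rate / routine_rate  # 0.50

# ...yet it still fails to prevent 20 dropouts.
dropouts_not_prevented = innovative_dropouts  # 20

print(innovative_rate, routine_rate, relative_reduction)
```

The point of the sketch is that a large *relative* reduction (50%) and a substantial number of unhelped clients (20 students) can coexist in the same result.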

      During the 1970s and 1980s, assertive case management came to be seen as a panacea for helping severely mentally ill patients dumped from state hospitals into communities in the midst of the deinstitutionalization movement. Studies supporting the effectiveness of assertive case management typically were carried out in states and communities that provided an adequate community-based service system for these patients. Likewise, ample funding enabled the case managers to have relatively low caseloads, sometimes fewer than 10 (Rubin, 1992). One study assigned only two cases at a time to each case manager and provided the case managers with discretionary funds that they could use to purchase resources for their two clients (Bush et al., 1990). These were high-quality studies, and their results certainly supported the effectiveness of assertive case management when provided under the relatively ideal study conditions.

      Rubin had recently moved from New York to Texas at the time those studies were emerging. His teaching and research in those days focused on the plight of the deinstitutionalized mentally ill. Included in his focus was the promise of, as well as issues in, case management. His work brought him into contact with various case managers and mental health administrators in Texas. They pointed out some huge discrepancies between the conditions in Texas and the conditions under which case management had been found to be effective in other (northern) states. Compared to other states, and especially to those where the studies were conducted, public funding in Texas for mental health services was quite meager. Case managers in Texas were less able to link their clients to needed community services due to the shortage of such services. Moreover, the Texas case managers lamented their caseloads, which they reported to be well in excess of 100 at that time. One case manager claimed to have a caseload of about 250! To these case managers, the studies supporting the effectiveness of assertive case management elsewhere were actually causing harm in Texas. That is, those studies were being exploited by state politicians and bureaucrats as a way to justify cutting costlier direct services, with the rationale that they were not needed because of the effectiveness of (supposedly cheaper) case management services.

      In light of the influence of practice context, deciding which intervention to implement involves a judgment call based in part on the best evidence; in part on your practice expertise; in part on your practice context; and in part on the idiosyncratic characteristics, values, and preferences of your clients. While you should not underestimate the importance of your judgment and expertise in making the decision, neither should you interpret this flexibility as carte blanche to allow your practice predilections to overrule the evidence. The fact that you are well trained in and enjoy providing an intervention that solid research has shown to be ineffective or much less effective than some alternative is not a sufficient rationale to automatically eschew alternative interventions on the basis of your expertise. Likewise, you should not let your practice preferences influence your appraisal regarding which studies offer the best evidence.

      One of the thornier issues in making your intervention decision concerns the number of strong studies needed to determine which intervention has the best evidence. For example, will 10 relatively weak, but not fatally flawed, studies with positive results supporting Intervention A outweigh one very strong study with positive results supporting Intervention B? Will one strong study suggesting that Intervention C has moderate effects outweigh one or two relatively weak studies suggesting that Intervention D has powerful effects? Although we lack an irrefutable answer to these questions, many EIP experts would argue that a study that is very strong from a scientific standpoint, such as one that has only a few minor flaws, should outweigh a large number of weaker studies containing serious (albeit perhaps not fatal) flaws. Supporting this viewpoint is research that suggests that studies with relatively weaker methodological designs can overestimate the degree of effectiveness of interventions (e.g., Cuijpers et al., 2010; Wykes et al., 2008). If you find that Intervention A is supported by one or two very strong studies and you find no studies that are equally strong from a scientific standpoint in supporting any alternative interventions, then your findings would provide ample grounds for considering Intervention A to have the best evidence.

      However, determining that Intervention A has the best evidence is not the end of the story. Future studies might refute the current ones or might show newer interventions to be more effective than Intervention A. Although Intervention A might have the best evidence for the time being, you should remember that EIP is an ongoing process. If you continue to provide Intervention A for the next 10 or more years, your decision to do so should rest on occasionally repeating the EIP process and continuing to find that Intervention A has the best supportive evidence.

      There may be reasons why Intervention A – despite having the best evidence – is not the best choice for your client. As discussed, your client's characteristics or your practice context might contraindicate Intervention A and thus influence you to select an alternative intervention with the next best evidence base. And even if you conclude that Intervention A is the best choice for your client, you should inform the client about the evidence and involve the client in making decisions about which interventions to use. We are not suggesting that you overwhelm clients with lengthy, detailed descriptions of the evidence. You might just tell them that based on the research so far, Intervention A appears to have the best chance