Practitioner's Guide to Using Research for Evidence-Informed Practice. Allen Rubin

      If you approach this topic with an open mind, and if you actually look for research evidence that can enhance your practice, you'll find many scientific studies that can help you become more effective in your practice and avoid doing harm. Seeking those studies and critically appraising them are part of what is called evidence-informed practice (EIP).

      The term evidence-informed practice was more commonly called evidence-based practice when it became fashionable near the end of the last century. The main ideas behind it, however, are really quite old. As early as 1917, for example, in her classic text on social casework, Mary Richmond discussed the use of research-generated facts to guide the provision of direct clinical services as well as social reform efforts.

      Also quite old is the skepticism about the notion that your practice experience and expertise – that is, your practice wisdom – are by themselves a sufficient foundation for effective practice. That skepticism does not imply that your practice experience and expertise are irrelevant and unnecessary – just that they alone are not enough.

      Perhaps you don't share that skepticism. In fact, it's understandable if you even resent it. Despite the existence of research studies showing that some intervention approaches are ineffective and perhaps harmful, students learning about clinical practice have long been taught that to be effective practitioners, they must believe in their own effectiveness as well as in the effectiveness of the interventions they employ. Chances are that you have learned this, too, either in your training or through your own practice experience. It stands to reason that clients will react differently depending on whether they are served by practitioners who are skeptical about the effectiveness of the interventions they provide or by practitioners who believe in those interventions and are enthusiastic about them.

      But it's hard to maintain optimism about your effectiveness if influential sources – like research-oriented scholars or managed care companies – express skepticism about the services you provide. Such skepticism was catalyzed by a notorious research study by Eysenck (1952), which concluded that psychotherapy was not effective (at least not in those days). Although various critiques of Eysenck's analysis later emerged that supported the effectiveness of psychotherapy, maintaining optimism was not easy in the face of subsequent research reviews that reached conclusions similar to Eysenck's about other forms of human services (Fischer, 1973; Mullen & Dumpson, 1972). Those reviews, in part, helped usher in what was then called an age of accountability – a precursor of the current EIP era.

      The main idea behind this so-called age was the need to evaluate the effectiveness of all human services. It was believed that doing so would help the public learn “what bang it was getting for its buck” and, in turn, lead to discontinued funding for ineffective programs and continued funding for effective ones. Thus, that era was also known as the program evaluation movement. It eventually became apparent, however, that many of the ensuing evaluations lacked credibility due to serious flaws in their research designs and methods – flaws that often stemmed from biases connected to the vested interests of program stakeholders. Nevertheless, many scientifically rigorous evaluations were conducted, and many had encouraging results supporting the effectiveness of certain types of interventions.

      The accumulation of scientifically rigorous studies showing that some interventions appear to be more effective than others helped spawn the EIP movement. In simple terms, the EIP movement encourages and expects practitioners to make practice decisions – especially about the interventions they provide – in light of the best scientific evidence available. In other words, practitioners might be expected to provide interventions whose effectiveness has been most supported by rigorous research and perhaps to eschew interventions that lack such support – even if it means dropping favored interventions with which they have the most experience and skills.

      The preceding paragraph used the words "in light of" the best scientific evidence, instead of implying that the decisions had to be dictated by that evidence. That distinction is noteworthy because some mistakenly view EIP in an overly simplistic cookbook fashion that seems to disregard practitioner expertise and practitioner understanding of client values and preferences. For example, EBP, the forerunner to EIP, was commonly misconstrued as a cost-cutting tool used by third-party payers, one that applies a rigid decision-tree approach to intervention choices irrespective of practitioner judgment. Perhaps you have encountered that view in your own practice (or in your own healthcare) when dealing with managed care companies that have rigid rules about which interventions must be employed as well as the maximum number of sessions that will be reimbursed. If so, you might fervently resent the EBP concept, and who could blame you! Many practitioners share that resentment.

      Managed care companies that interpret EBP in such overly simplistic terms can pressure you to do things that your professional expertise leads you to believe are not in your clients' best interests. Moreover, in a seeming disregard for the scientific evidence about the importance of relationship factors and other common factors that influence positive outcomes, managed care companies can foster self-doubt about your own practice effectiveness when you do not mechanically provide the interventions on their list of what they might call "evidence-based practices." Such doubt can weaken your belief in what you are doing and, in turn, hinder the more generic relationship factors that can influence client progress as much as the interventions you employ. Another problem with the list approach is its potential to stifle innovation in practice. Limiting interventions to an approved list means that novel practices are less likely to be developed and tested in the field. As you read on, you will find that EIP is a much more expansive and nuanced process than simply choosing an intervention from a list of anointed programs and services.

      The foregoing overly simplistic view of EBP probably emanated from the way it was originally defined in medicine in the 1980s (Barber, 2008; Rosenthal, 2006). Unfortunately, the list or cookbook approach to EBP has probably stuck around because it seemed like a straightforward way to make good practice decisions. It's much simpler for funders and others to implement and monitor whether practitioners are using an approved intervention than it is to implement and monitor the complexities of the EIP process. For example, one study found that mental health authorities in six states mandated the use of specific children's mental health interventions