Practitioner's Guide to Using Research for Evidence-Informed Practice. Allen Rubin

ISBN 9781119858584

treatment (sometimes called “services as usual” or “treatment as usual”) control condition. Treatment effectiveness is supported if the intervention group's outcome is significantly better than the no-treatment or routine treatment group's outcome.

      It is understandable that some perceive EIP as valuing evidence only if it is produced by experimental studies. That's because tightly controlled experiments actually do reside at the top of one of the research hierarchies implicit in EIP. That is, when our EIP question asks about whether a particular intervention really is effective in causing a particular outcome, the most conclusive way to rule out alternative plausible explanations for the outcome is through tightly controlled experiments. The chapters in Part II of this book examine those alternative plausible explanations and how various experimental and quasi-experimental designs attempt to control for them.

      When thinking about research hierarchies in EIP, however, we should distinguish the term research hierarchy from the term evidentiary hierarchy. Both types of hierarchies imply a pecking order in which certain types of studies are ranked as more valuable or less valuable than others. In an evidentiary hierarchy, the relative value of the various types of studies depends on the rigor and logic of the research design and the consequent validity and conclusiveness of the inferences – or evidence – that it is likely to produce.

      In contrast, the pecking order of different types of studies in a research hierarchy may or may not be connected to the validity or conclusiveness of the evidence associated with a particular type of study. When the order does depend on the likely validity or conclusiveness of the evidence, the research hierarchy can also be considered an evidentiary hierarchy. However, when the pecking order depends on the relevance or applicability of the type of research to the type of EIP question being asked, the research hierarchy would not be considered an evidentiary hierarchy. In other words, different research hierarchies are needed for different types of EIP questions, both because the degree to which a particular research design attribute is a strength or a weakness varies with the type of EIP question being asked and because some EIP questions render some designs irrelevant or infeasible.

      Qualitative studies tend to employ flexible designs and subjective methods – often with small samples of research participants – in seeking to generate tentative new insights, deep understandings, and theoretically rich observations. In contrast, quantitative studies put more emphasis on producing precise and objective statistical findings that can be generalized to populations or on designs with logical arrangements that are geared to testing hypotheses about whether predicted causes really produce predicted effects. Some studies combine qualitative and quantitative methods, and thus are called mixed-method studies.

      Some scholars who favor qualitative inquiry misperceive EIP as devaluing qualitative research. Again, that misperception is understandable in light of the predominant attention given to causal questions about intervention effectiveness in the EIP literature, and the preeminence of experiments as the “gold standard” for sorting out whether an intervention or some other explanation is really the cause of a particular outcome. That misperception is also understandable because when the EIP literature does use the term evidentiary hierarchy or research hierarchy it is almost always in connection with EIP questions concerned with verifying whether it is really an intervention – and not something else – that is the most plausible cause of a particular outcome. Although the leading texts and articles on the EIP process clearly acknowledge the value of qualitative studies, when they use the term hierarchy it always seems to be in connection with causal questions for which experiments provide the best evidence.

      A little later in this chapter, we examine why experiments reside so high on the evidentiary hierarchy for answering questions about intervention effectiveness. Right now, however, we reiterate the proposition that more than one research hierarchy is implicit in the EIP process. For some questions – like the earlier one about understanding homeless shelter experiences – we'd put qualitative studies at the top of a research hierarchy and experiments at the bottom.

      Countless specific kinds of EIP questions would be applicable to a hierarchy where qualitative studies might reside at the top. We'll just mention two more examples: Are patient-care staff members in nursing homes or state hospitals insensitive, neglectful, or abusive – and if so, in what ways? To answer this question, a qualitative inquiry might involve posing as a resident in such a facility.

      Chapter 1 identifies and discusses six types of EIP questions. If research hierarchies were to be developed for each of these types of questions, experimental designs would rank high on the ones about effectiveness, but would either be infeasible or of little value for the others. Qualitative studies would rank low on the ones about effectiveness, but high on the one about understanding client experiences.

      Let's now look further at some types of research studies that would rank high and low for some types of EIP questions. In doing so, let's save the question about effectiveness for last. Because the fifth and sixth types of EIP questions – about costs and potential harmful effects – tend to pertain to the same types of designs as questions of effectiveness, we'll skip those two so as to avoid redundancy. The point of this discussion is not to exhaustively cover every possible type of design for every possible type of EIP question. Instead, it is just to illustrate how different types of EIP questions imply different types of research designs, and that the research hierarchy for questions about effectiveness does not apply to other types of EIP questions. Let's begin with the question: What factors best predict desirable and undesirable outcomes?