| Title | Practitioner's Guide to Using Research for Evidence-Informed Practice |
|---|---|
| Author | Allen Rubin |
| Genre | Psychotherapy and Counseling |
| ISBN | 9781119858584 |
It is understandable that some perceive EIP as valuing evidence only if it is produced by experimental studies. That's because tightly controlled experiments actually do reside at the top of one of the research hierarchies implicit in EIP. That is, when our EIP question asks about whether a particular intervention really is effective in causing a particular outcome, the most conclusive way to rule out alternative plausible explanations for the outcome is through tightly controlled experiments. The chapters in Part II of this book examine those alternative plausible explanations and how various experimental and quasi-experimental designs attempt to control for them.
When thinking about research hierarchies in EIP, however, we should distinguish the term research hierarchy from the term evidentiary hierarchy. Both types of hierarchies imply a pecking order in which certain types of studies are ranked as more valuable or less valuable than others. In an evidentiary hierarchy, the relative value of the various types of studies depends on the rigor and logic of the research design and the consequent validity and conclusiveness of the inferences – or evidence – that it is likely to produce.
In contrast, the pecking order of different types of studies in a research hierarchy may or may not be connected to the validity or conclusiveness of the evidence associated with a particular type of study. When the order does depend on the likely validity or conclusiveness of the evidence, the research hierarchy can also be considered to be an evidentiary hierarchy. However, when the pecking order depends on the relevance or applicability of the type of research to the type of EIP question being asked, the research hierarchy would not be considered an evidentiary hierarchy. In other words, different research hierarchies are needed for different types of EIP questions because the degree to which a particular research design attribute is a strength or a weakness varies depending on the type of EIP question being asked and because some EIP questions render some designs irrelevant or infeasible.
Experiments get a lot of attention in the EIP literature because so much of that literature pertains to questions about the effectiveness of interventions, programs, or policies. However, not all EIP questions imply the need to make causal inferences about effectiveness. Some other types of questions are more descriptive or exploratory in nature and thus imply research hierarchies in which experiments have a lower status because they are less applicable. Although nonexperimental studies might offer less conclusive evidence about cause and effect, they can reside above experiments on a research hierarchy for some types of EIP questions. For example, Chapter 1 discusses how some questions that child welfare administrators might have could best be answered by nonexperimental studies. It also discusses how a homeless shelter administrator who is curious about the reasons for service refusal might seek answers in qualitative studies. Moreover, even when we seek to make causal inferences about interventions, EIP does not imply a black-and-white evidentiary standard in which evidence has no value unless it is based on experiments. For example, as interventions and programs are developed and refined, there is a general progression of research from conceptual work to pilot testing for feasibility and acceptability, toward larger and more rigorous efficacy and effectiveness studies. Oftentimes smaller, less tightly controlled intervention studies are conducted when interventions, programs, and policies are in development. These designs don't reflect poor-quality research, but rather a common progression across the development of new policies, programs, and interventions. Again, there are various shades of gray, and thus various levels on a hierarchy of evidence regarding the effects of interventions, as you will see throughout this book.
3.2 Qualitative and Quantitative Studies
Qualitative studies tend to employ flexible designs and subjective methods – often with small samples of research participants – in seeking to generate tentative new insights, deep understandings, and theoretically rich observations. In contrast, quantitative studies put more emphasis on producing precise and objective statistical findings that can be generalized to populations or on designs with logical arrangements that are geared to testing hypotheses about whether predicted causes really produce predicted effects. Some studies combine qualitative and quantitative methods, and thus are called mixed-method studies.
Some scholars who favor qualitative inquiry misperceive EIP as devaluing qualitative research. Again, that misperception is understandable in light of the predominant attention given to causal questions about intervention effectiveness in the EIP literature, and the preeminence of experiments as the “gold standard” for sorting out whether an intervention or some other explanation is really the cause of a particular outcome. That misperception is also understandable because when the EIP literature does use the term evidentiary hierarchy or research hierarchy it is almost always in connection with EIP questions concerned with verifying whether it is really an intervention – and not something else – that is the most plausible cause of a particular outcome. Although the leading texts and articles on the EIP process clearly acknowledge the value of qualitative studies, when they use the term hierarchy it always seems to be in connection with causal questions for which experiments provide the best evidence.
A little later in this chapter, we examine why experiments reside so high on the evidentiary hierarchy for answering questions about intervention effectiveness. Right now, however, we reiterate the proposition that more than one research hierarchy is implicit in the EIP process. For some questions – like the earlier one about understanding homeless shelter experiences, for example – we'd put qualitative studies at the top of a research hierarchy and experiments at the bottom.
Countless specific kinds of EIP questions would fit a hierarchy in which qualitative studies reside at the top. We'll just mention two more examples. The first: Are patient-care staff members in nursing homes or state hospitals insensitive, neglectful, or abusive – and if so, in what ways? To answer this question, a qualitative inquiry might involve posing as a resident in such a facility.
A second example might be: How do parents of mentally ill children perceive the way they (the parents) are treated by mental health professionals involved with their child? For example, do they feel blamed for causing or exacerbating the illness (and thus feel more guilt)? Open-ended and in-depth qualitative interviews might be the best way to answer this question. (Administering a questionnaire in a quantitative survey with a large sample of such parents might also help.) We cannot imagine devising an experiment for such a question, and therefore again would envision experiments at the bottom of a hierarchy in which qualitative interviewing (or quantitative surveys) would be at or near the top.
3.3 Which Types of Research Designs Apply to Which Types of EIP Questions?
Chapter 1 identifies and discusses six types of EIP questions. If research hierarchies were to be developed for each of these types of questions, experimental designs would rank high on the ones about effectiveness, but would either be infeasible or of little value for the others. Qualitative studies would rank low on the ones about effectiveness, but high on the one about understanding client experiences.
Let's now look further at some types of research studies that would rank high and low for some types of EIP questions. In doing so, let's save the question about effectiveness for last. Because the fifth and sixth types of EIP questions – about costs and potential harmful effects – tend to pertain to the same types of designs as do questions of effectiveness, we'll skip those two so as to avoid redundancy. The point in this discussion is not to exhaustively cover every possible type of design for every possible type of EIP question. Instead, it is just to illustrate how different types of EIP questions imply different types of research designs and that the research hierarchy for questions about effectiveness does not apply to other types of EIP questions. Let's begin with the question: What factors best predict desirable and undesirable outcomes?