Research in the Wild. Paul Marshall

Title: Research in the Wild
Author: Paul Marshall
Series: Synthesis Lectures on Human-Centered Informatics
ISBN: 9781681731971

work out what to do in order to complete the tasks set for them, by following the instructions given. They may find themselves having to deal with various “demand characteristics”: the cues that make participants aware of what the experimenter expects to find, wants to happen, or how they are expected to behave. As such, the ecological validity of lab studies can be compromised, as participants perform to conform to the experimenter’s expectations.

      A downside of evaluating technology in situ, however, is that the researcher loses control over how it will be used or interacted with. In a lab, tasks can be set and predictions made to investigate systematically how participants manage to do them when using a novel device, system, or app. In the wild, however, participants are typically given a device to use without any set tasks. They may be told what it can do and given instructions on how to use it, but the purpose of evaluating it in a naturalistic setting is to explore what happens when they try to use it in context, where other demands and factors may be at play. This often means that only a fraction of the full range of functionality designed into the technology is used or explored, making it difficult for researchers to see whether what has been designed is useful, usable, or capable of supporting the intended interactions.

      To examine how much is lost and gained, Kjeldskov et al. (2004) conducted a comparative study of a mobile system designed for nurses in the lab vs. in the wild. They found that both settings revealed similar kinds of usability problems, but that more were discovered in the lab than in the wild. However, the cost of running a study in the wild was considerably greater than in the lab, leading them to question “Was it worth the hassle?” They suggest that in the wild studies might be better suited to obtaining initial insights into how to design a new system, which can then feed into the requirements-gathering process, while early usability testing of a prototype system can be done in the confines of the lab. This pragmatic approach to usability testing and requirements gathering makes good sense when considering how best to develop and progress a new system design. In a follow-up survey of research on mobile HCI using lab and in the wild studies, Kjeldskov and Skov (2014) concluded that it is not a matter of one being better than the other, but of when it is best to conduct a lab study vs. an in the wild study. Furthermore, they conclude that when researchers go into the wild they should “go all the way” and not settle for some “half-tame” setting. Only by carrying out truly wild studies can researchers experience and understand real-world use.

      Findings from other RITW user studies have shown that they can reveal much more than usability problems (Hornecker and Nicol, 2012). In particular, they enable researchers to explore how a range of factors can influence user behavior in situ: how people notice, approach, and decide what to do with a technology intervention, whether one they are given to try or one they come across. This goes beyond what can typically be observed in a lab-based study. Rogers et al. (2007) found marked differences in usability and usefulness when comparing a mobile device in the wild and in the lab; the mobile device was developed to enable groups of students to carry out environmental science, as part of a long-term project investigating ecological restoration of urban regions. The device provided interactive software that allowed a user to record and look up relevant data, information visualizations, and statistics. It was intended to replace the existing practice of recording measurements of tree growth on paper when in the field. Placing the new mobile device in the palms of students on a cold spring day revealed a whole host of unexpected, context-based usability and user experience problems. Placing it in their palms on a hot summer day revealed a quite different set of unexpected, context-based problems. The device was used quite differently at different times of year, when foliage and other environmental cues vary and affect the extent to which a tree can be found and identified.

      Other studies have also found that people will often approach and use prototypes differently in the wild compared with in a lab setting (e.g., Brown et al., 2011; Peltonen et al., 2008; van der Linden et al., 2011). People are often inventive and creative in what they do when coming across a prototype or system, but can also get frustrated or confused, in ways that are difficult to predict or expect from lab-based studies (Marshall et al., 2011). Van der Linden et al. (2011) also observed different behaviors, not evident from their lab-based studies, when investigating how haptic technology could improve children’s learning to play the violin at school. An in situ study of their MusicJacket system showed that real-time vibrotactile feedback was most effective when matched to tasks selected by the children’s teachers to be at the right level of difficulty, rather than what the researchers thought would be right for them. Similarly, Gallacher et al. (2015) discovered quite different findings when they ran the same in the wild study in different places. Based on the differing outcomes from lab studies and in the wild approaches, Rogers et al. (2013) questioned whether findings from controlled settings can transfer to real-world settings.

      In summary, in situ studies can provide new ways of thinking about how to scope and conduct research. Compared with running experiments and usability studies, where researchers try to predict in advance the performance and the likelihood or kinds of usability errors, running in situ studies nearly always provides unexpected findings about what humans might or might not do when confronted with a new technology intervention. Even when experiments are run in the wild, non-significant findings can be most informative. Part of the appeal of RITW is uncovering the unexpected rather than confirming what is hoped for or already known.

      RITW is eclectic in what it does and what it seeks to understand. Such an unstructured approach to research might seem unwieldy, lacking the rigor and commitment usually associated with a given epistemology. However, this broad church stance does not mean sloppiness or a lowering of standards; rather, it can open up new possibilities for conducting far-reaching, impactful, and innovative research. To help frame RITW, we have developed a generic framework. Figure 1.1 depicts RITW in terms of four core bases that connect to each other. These are regarded as starting places from which to scope and operationalize the research, in terms of:

      1. technology,

      2. design,

      3. in situ studies, and

      4. theory.

      Each can inform the others to situate, shape, and progress the research. For example, designing a new activity (e.g., collaborative learning) can be done by working alongside others (e.g., participatory design), leading to the development of a new technology. The findings from an in situ study (e.g., how people search for information on the fly using their smartphones) can inform new theory (e.g., augmented memory). An existing theory (e.g., attention) can inform the design of a new app intended to measure how people multitask in their everyday lives when using smartphones, tablets, and laptops. The design of a new technology (e.g., augmented reality) can be used to enhance a social activity in the wild (e.g., how families learn about the ecology of woodlands together). It should be stressed, however, that the RITW framework is not meant to be prescriptive in terms of which base to start from, or what methods and analytic lenses to use, when conducting research. The selection of these depends on the motivation for the research, its scoping, the available funding and resources, and the expected outcomes.


      Figure 1.1: Research in the wild (RITW) framework.

      There are many ways of conducting research in the wild. An initial challenge is to scope the research: to determine what can realistically be discovered or demonstrated, which methods to use to achieve this, and what to expect when using them. Sometimes, it might involve deploying hundreds of prototypes in people’s homes (e.g., Gaver et al., 2016) to observe the varied adoptions and appropriations of many people rather than those of a few. Other times, it entails months of community-building and stakeholder engagement in order to build up trust and commitment before studying the outcome of an intervention they propose or a disruption