Title | Remote Research |
---|---|
Author | Tony Tulathimutte |
Genre | Management, recruiting |
Series | |
Publisher | Rosenfeld Media |
Year | 0 |
ISBN | 9781933820446 |
Case Study: Lab vs. Remote
By Julia Houck-Whitaker, Adaptive Path
(and Bolt | Peters alum)
In 2002, Bolt | Peters conducted two parallel usability studies on the corporate Web site of a Fortune 1000 software company. Both studies used identical test plans, but one was executed in a traditional usability lab, whereas the other was conducted remotely using an online screen sharing tool.
Summary
Our comparison showed key differences in the areas of time, recruiting, and resource requirements, as well as the ability to test geographically distributed user audiences. Table 1.1 summarizes the key differences we found comparing the two methods. There appeared to be no significant differences in the quality and quantity of usability findings between remote and in-lab approaches.
Table 1.1 Overview Comparison of Lab and Remote Methods
http://www.flickr.com/photos/rosenfeldmedia/4286397757/
Detailed Comparison of Methods
Tables 1.2, 1.3, and 1.4 break down the process for each of the recruiting, testing, and analysis phases, respectively. The left-hand column describes the lab study details; the right-hand column describes the remote study details.
Table 1.2 Lab vs. Remote Recruiting
http://www.flickr.com/photos/rosenfeldmedia/4287138014/
Recruiting for the lab-based study was outsourced to a professional recruiting agency (see Table 1.2). Ten users were recruited, screened, and scheduled by G Focus Groups in San Francisco, including two extra recruits in case of no-shows. Recruiting the eight users through the agency took 12 days and ultimately provided seven valid test subjects for the lab study; the eighth recruit did not fulfill the testing criteria.
Recruiting for the remote study was conducted using an online pop-up from the software company’s corporate Web site. The recruiting pop-up, hosted by the researchers, used the same questions as the G Focus Groups’ recruiting screener. Users in both studies were selected based on detailed criteria such as job title and annual company revenues. Respondents to the online screener who met the study’s qualifications were contacted in real time by the research moderators. The online recruiting method took one day and yielded eight users total from California, Utah, New York, and Oregon. Normally, the live screener requires four days of lead time to set up, but in this case it was completed for a previous project so setup was not necessary.
Table 1.3 Lab vs. Remote Environment
http://www.flickr.com/photos/rosenfeldmedia/4287138116/
The lab study was conducted in the software company’s in-house usability lab. The recruits for the lab study went to the lab in Pleasanton, California, to participate and used a Windows PC. In addition to capturing users’ audio and screen movements, the sessions also recorded users’ facial expressions.
The remote usability study was conducted using a portable lab from the software company’s headquarters in Pleasanton, California. The live recruits participated from their native environments and logged on to an online meeting allowing the moderators to view the participants’ screen movements. The users’ audio and screen movements were captured.
The lab study uncovered issues of similar quality and usefulness to those found in the remote study (see Table 1.4), and both methods surfaced usability issues of high value to the client. The lab method uncovered 98 key findings, compared with 116 findings in the remote study (not a statistically significant difference).
Table 1.4 Lab vs. Remote Findings
http://www.flickr.com/photos/rosenfeldmedia/4286397971/
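The claim that 98 versus 116 findings is not a statistically significant difference can be sanity-checked with a simple chi-square goodness-of-fit test. This is a sketch of one plausible way to run that check, not the analysis the original researchers describe; the null hypothesis that both methods yield findings at an equal rate is our assumption:

```python
import math

def chi_square_equal_counts(a: int, b: int) -> tuple[float, float]:
    """Chi-square goodness-of-fit test for two counts under the null
    hypothesis that both methods produce findings at the same rate."""
    expected = (a + b) / 2  # equal split of the 214 total findings
    chi2 = (a - expected) ** 2 / expected + (b - expected) ** 2 / expected
    # Survival function of the chi-square distribution with 1 degree of
    # freedom: P(X > chi2) = erfc(sqrt(chi2 / 2)).
    p_value = math.erfc(math.sqrt(chi2 / 2))
    return chi2, p_value

chi2, p = chi_square_equal_counts(98, 116)
print(f"chi2 = {chi2:.3f}, p = {p:.3f}")  # p is well above 0.05
```

With these counts the p-value lands around 0.2, comfortably above the conventional 0.05 threshold, which is consistent with the study's conclusion that the two methods did not differ in the quantity of findings.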
The Case Against Remote Research
by Andy Budd
Even though we’re obviously firm advocates of remote methods, not all UX practitioners agree. Andy Budd is the creative director at Clearleft, a renowned London-based team of UX and Web design experts. Clearleft are the makers of Silverback, in-person usability testing software for interface designers and developers. Andy isn’t such a big fan of remote methods for moderated studies—here’s why.
Full disclosure: I’ve done very little remote testing, and the reason is that I’ve never found a credible need to do it. We’ve always found other ways of testing that didn’t require a remote approach. My issue is less about the negatives of remote testing and more about the positives of in-person testing.
Now, that’s not to say we test in a lab; I find that labs give a veneer of formality and scientific accuracy to studies, which they often don’t have. And testing labs and equipment are often more expensive, and tend to bog things down. So we take a grittier approach—just a meeting room and a video camera or some screen capture software.
We gain a lot of information by being in the room with people. They say 90% of communication is nonverbal. It’s about the cues in people’s tone of voice or posture. When you’re with a test subject, you pick up these signals more easily. With online video conferencing such as Skype, social conventions break down; you’re not able to read the cues that tell you when one person has stopped talking or when it’s OK for another person to start talking. You get lag, and people talk over each other. Communicating remotely is difficult to do well, and it may have to do with how little practice we have with these new tools and technologies. I wouldn’t be surprised if, give it 20 or 30 years, video conferencing becomes the norm and we learn how to understand and read these subtle cues better. For now, there’s the potential to lose 90% of the information that’s coming through to you if you’re not testing in person.
On the Shortcomings of Remote Methods
Usability testing is all about empathy. It’s about creating a connection. That kind of empathy is difficult to create through Web conferencing, and it’s that gulf of miscommunication that makes it less attractive to me. I think there are instances where you should use remote moderated testing, often when it’s impossible to recruit users to a specific location. Recently, we were working on a project for a South American site. We wanted to speak to Brazilians, so initially we thought to do some kind of remote testing, but then realized that there was actually a large student and ex-pat Brazilian community here [in Brighton]. So instead we went to a Brazilian café and sat down and just chatted to actual Brazilian people who happened to be living in the UK. Some people said, “How can you possibly say that talking to Brazilian people in a Brighton café is exactly the same as in a favela in São Paulo?”