While it is tempting to use eye tracking to uncover usability problems that would otherwise go undetected, this type of insight is not as prevalent as you might think. If there really is confusion, someone eventually will make a mistake or verbalize uncertainty. The larger the sample size in your study, the more likely that is to happen. Iterative testing also increases the chances of these problems revealing themselves in more obvious ways. But if all you have resources for is one small usability test, digging deeper into the eye tracking data may expose participant difficulties that are important to address. Chapter 12 provides more information on how to find usability issues based on eye tracking data even when other methods may be pointing to a problem-free experience.
Explaining Usability Problems
More often than for detecting “unobservable” problems, eye tracking is used to explain the source of usability problems that have been detected with conventional methods. For example, if your study found that participants would not click on something they should have, and it is unclear why, a follow-up study using eye tracking could explore this behavior in greater depth. Even if there has been no previous research, your study may reveal failures or inefficiencies that eye tracking can shed some light on. In anticipation of that, you may want to add an eye tracking component to the study, especially if there is no time or budget for follow-up research, should additional questions arise.
Here are some usability problem-related questions that eye tracking might help answer:
• Why did the study participants take an incorrect action?
Imagine you just conducted a usability study of a website and discovered that very few participants selected the link that enabled them to successfully complete a certain task. This could have occurred because the correct link was difficult to find, or because it was labeled or represented in a way that made it difficult to associate with the task goal. Making a design recommendation would require identifying the source of the problem.
If participants never looked at the correct link, you might attribute the problem to the link’s poor visibility. Depending on the gaze pattern and behavioral information, you could narrow down the cause to the link’s poor location, its poor saliency, other distracting elements, or even overall clutter. But if the correct link was noticed (and not selected), the problem could have resulted from poor labeling, or the fact that the link did not appear to be clickable. (A minimal analysis sketch of this distinction appears after this list of questions.)
The “Website of a Professional Organization” case study is an example of how eye tracking helped explain why participants kept selecting the wrong link on a Web page.
• Why did the correct action take longer than expected?
While the previous question focuses on failure, this one is about success, albeit an inefficient one. Let’s say that participants found the correct button on a printer, but their search took a lot longer than expected. Again, several explanations, similar to those listed previously, are possible. Eye tracking can help narrow down these explanations by offering a more detailed picture of what happened before the desired action finally took place.
The “Website of a Professional Organization” case study also shows how eye tracking helped explain why participants spent longer looking for a correct link using one of the tested homepage designs.
• Why did participants fail to extract certain information?
The observable outcomes that eye tracking can be used to explain are not limited to physical actions such as mouse clicks or button presses. They also include participants’ comments, such as responses to comprehension questions, which are often asked during testing of instructional material, paper or online bills, and packaging. Eye tracking can shed light on the reasons an interface failed to convey the information it was supposed to convey. For example, did the participants not know what their water usage was because they didn’t understand the chart on the bill or because they overlooked the chart entirely?
The “Car Charger Packaging” case study describes how eye tracking provided an explanation as to why study participants did not know what was in the package they were shown.
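To make the distinctions in these questions concrete, here is a minimal analysis sketch, not taken from the book, assuming a hypothetical data format: a list of timestamped fixation coordinates per participant, a rectangular area of interest (AOI) drawn around the correct link, and a flag indicating whether the participant ultimately selected it. The names (Fixation, AOI, TARGET_AOI, diagnose) and all coordinates are illustrative assumptions.

```python
# A minimal, hypothetical sketch: classify each participant's trial based on
# whether the correct link's area of interest (AOI) was ever fixated and, if
# so, how quickly. All names and numbers are illustrative assumptions.

from dataclasses import dataclass

@dataclass
class Fixation:
    timestamp_ms: int  # time since the task started
    x: float
    y: float

@dataclass
class AOI:
    left: float
    top: float
    right: float
    bottom: float

    def contains(self, fix: Fixation) -> bool:
        return self.left <= fix.x <= self.right and self.top <= fix.y <= self.bottom

# Hypothetical bounding box of the correct link on the page.
TARGET_AOI = AOI(left=640, top=180, right=860, bottom=220)

def diagnose(fixations: list[Fixation], selected_target: bool) -> str:
    """Map one trial to the diagnostic categories discussed above."""
    hits = [f for f in fixations if TARGET_AOI.contains(f)]
    if not hits:
        # Never fixated the correct link: suggests a visibility problem
        # (location, saliency, clutter) rather than a labeling problem.
        return "target never noticed"
    time_to_first_fixation = hits[0].timestamp_ms
    if not selected_target:
        # Noticed but not selected: suggests labeling or perceived
        # clickability (affordance) problems.
        return f"target noticed at {time_to_first_fixation} ms but not selected"
    # Noticed and selected; a long time to first fixation points to an
    # inefficient (slow) success.
    return f"target noticed at {time_to_first_fixation} ms and selected"

# Made-up trial for one participant who looked at the link but clicked elsewhere.
trial = [Fixation(300, 120, 90), Fixation(900, 700, 200), Fixation(1500, 705, 210)]
print(diagnose(trial, selected_target=False))
```

Splitting participants into “never fixated the target” versus “fixated but did not select” groups, and noting the time to first fixation for those who did, is one way to tie these diagnostic questions to specific design recommendations.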
Questions such as the three mentioned previously can come up either prior to the study because of known problems, or during or after data collection as a result of observed problems. How are the answers to these questions actionable? If you understand the origin of the problem, you can make better, more specific recommendations and reduce the trial and error sometimes associated with redesign.
Chapter 12 describes in detail how to use eye tracking data to identify the sources of usability issues. The usability issues that eye tracking can help explain typically originate in the suboptimal overall interface layout, specific element placement, graphic treatment, affordances, labeling, and messaging. Depending on what in the interface caused the problem, you could recommend, for example, increasing the visual prominence of the “invisible” element, improving its location, changing the wording, enhancing affordances, or decreasing page clutter.
Case Study: Website of a Professional Organization
Why Eye Tracking?
After the website of the American Society of Clinical Oncology (ASCO.org) was redesigned with the goal of being more user-friendly, my team was asked to validate the new design against the original design.1 The objectives of the study were to assess whether the redesigned ASCO homepage would make it easier for users to locate the right starting points for key tasks and to help determine any potential areas for improvement within the proposed redesign. The tasks included, for example, finding information on FDA drug approvals, joining the organization, and locating a list of upcoming conferences.
The users of the site are oncologists, who are fairly difficult to recruit, especially in large numbers. Conducting the study at their annual conference was our best chance of completing the research in a timely manner. The challenge was to design the sessions to last no longer than 20 minutes so as not to take the participants away from the conference activities.
In case user performance, which was measured by task success and time on task, was not optimal, we needed to understand the “why” so that we could come up with specific recommendations. The short session length offered limited opportunity for follow-up discussion, and the annual occurrence of the conference meant we would not have the chance at another such study until the following year. Since expediency was doubly important, we decided to use eye tracking to gather information on the process that resulted in the participants’ selections on the homepage.
How Eye Tracking Contributed to the Research
In this study, not only did we detect and quantify the differences in accuracy and efficiency between the two designs, but we were also able to explain what contributed to these differences. Understanding the reasons behind hesitations and incorrect clicks allowed us to make informed recommendations for how to improve the new design.
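As a rough illustration of what “detect and quantify” can look like in practice, here is a minimal sketch. The case study does not say which statistical tests were used; Fisher’s exact test for task success and the Mann-Whitney U test for time on task are common choices for small usability samples and, along with all the numbers, are assumptions here.

```python
# A minimal, hypothetical sketch of quantifying accuracy and efficiency
# differences between two designs. The tests and all numbers are assumptions,
# not taken from the case study.

from scipy.stats import fisher_exact, mannwhitneyu

# Task success counts for one task: [successes, failures] per design.
original_design = [12, 0]   # e.g., everyone found the correct link
redesigned = [0, 12]        # e.g., no one found the correct link

_, p_success = fisher_exact([original_design, redesigned])
print(f"Task success difference: p = {p_success:.4f}")

# Time on task (seconds) on another task, per design (again hypothetical).
times_original = [18, 22, 25, 31, 27, 20]
times_redesigned = [44, 52, 39, 61, 47, 55]

_, p_time = mannwhitneyu(times_original, times_redesigned, alternative="two-sided")
print(f"Time-on-task difference: p = {p_time:.4f}")
```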
Eye Tracking Explained Differences in Accuracy
As expected, there were several tasks in which the two designs differed significantly in terms of accuracy. For example, when participants were looking for ASCO position statements, all correctly selected the Policy & Practice link in the original design, but no one selected the correct link (Practice Resources) on the redesigned homepage. Eye tracking data were used to understand how this disparity in task success (100% vs. 0%) happened.
The