Conducting Usability Tests

Usability testing draws on many of the same principles as usability evaluations. For example, in a test, you start by determining what you want to learn. You choose test participants carefully, and you repeat the test with many participants. You change the draft and retest with still more participants. You record what you have learned.

The big differences between usability evaluation and usability testing are that testing always involves real users (or people who match the characteristics of real users) carrying out real tasks, often takes place in a specialized lab, is recorded using more sophisticated media, and is documented in more formal reports that are distributed to more people.

This section covers four topics:

  • the basic principles of usability testing
  • preparing for a usability test
  • conducting a usability test
  • interpreting and reporting the data from a usability test

THE BASIC PRINCIPLES OF USABILITY TESTING

Three basic principles underlie usability testing:

PREPARING FOR A USABILITY TEST

Usability testing requires careful planning. According to usability specialist Laurie Kantner (1994), planning accounts for one-half to three-quarters of the time devoted to testing. In planning a usability test, you must complete eight main tasks:

Read more about your audience’s needs in Ch. 5.

Figure 13.3 A Usability Lab
The two people in the foreground are in the observation room, where they are monitoring the woman performing the usability test in the testing room.
From Gwinnett Business Journal, by permission of Tillman, Allen, Greer.

For information about proposals, see Ch. 16.

CONDUCTING A USABILITY TEST

The testing team has to plan the test carefully and stay organized. Typically, the team creates a checklist and a schedule for the test day, specifying every task that every person, including the test participant, is to carry out. Conducting the test includes interacting with the test participant both during the formal test and later, during a debriefing session.
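Some teams keep that checklist and schedule in a structured, scriptable form so it can be printed, sorted, and reused from test to test. The following Python sketch shows one minimal way to do that; the roles, times, and actions are hypothetical, not drawn from this section.

    from dataclasses import dataclass

    @dataclass
    class ScheduleItem:
        start: str   # clock time, e.g., "09:00"
        person: str  # who carries out this item (administrator, note taker, participant)
        action: str  # what he or she is to do

    # Hypothetical test-day schedule; every time, role, and action is illustrative.
    schedule = [
        ScheduleItem("08:30", "administrator", "check recording equipment and testing room"),
        ScheduleItem("08:45", "note taker", "open the data-log template"),
        ScheduleItem("09:00", "administrator", "greet participant and obtain informed consent"),
        ScheduleItem("09:10", "participant", "carry out Task 1 while thinking aloud"),
        ScheduleItem("09:45", "administrator", "conduct the debriefing interview"),
    ]

    # Print the schedule as a simple checklist for the test day.
    for item in schedule:
        print(f"{item.start}  {item.person:<13} {item.action}")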

Interacting with the Test Participant

Among the most popular techniques for eliciting information from a test participant is the think-aloud test, in which the participant says aloud what he or she is thinking while using a document or a website. Consider the earlier example of FloorTraxx software for designing custom floors. In planning to test the software, you would first create a set of tasks for the participant to carry out:

As the participant carries out each task, he or she thinks aloud about the process. Because this process might make the test participant feel awkward, the test administrator might demonstrate the process at the beginning of the session by thinking aloud while using one of the features on a cell phone or finding and using an app on a tablet.

While the test participant thinks aloud, a note taker records anything that is confusing and any point at which the test participant is not sure about what to do. If the test participant gets stuck, the administrator prompts with a neutral, open-ended question, such as “Where do you think that function might be located?” or “What did you expect to see when you clicked that link?” Questions should not take the user’s knowledge for granted or embarrass the test participant for failing a task. For example, “Why didn’t you click the Calculate button?” assumes that the user should have seen the button and should have known how to use it.

In addition, questions should not bias the test participant. When testers ask a participant a question, they should try not to reveal the answer they want. They should not say, “Well, that part of the test was pretty easy, wasn’t it?” Regardless of whether the participant thought it was simple or difficult, his or her impulse will be to answer yes. Usability specialists Joseph S. Dumas and Janice Redish recommend using neutral phrasing, such as “How was it performing that procedure?” or “Did you find that procedure easy or difficult?” (1999). In responding to questions, testers should be indirect. If the participant asks, “Should I press ‘Enter’ now?” they might respond, “Do you think you should?” or “I’d like to see you decide.”

To ensure that the test stays on schedule and is completed on time, the test administrator should set a time limit for each task. If the test participant cannot complete the task in the allotted time, the administrator should move on to the next task.
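A team could enforce those per-task time limits with nothing more than a stopwatch, or with a few lines of code. The Python sketch below is a hypothetical illustration: the task names and time budgets are invented, and the administrator presses Enter when the participant finishes or the team decides to move on.

    import time

    # Hypothetical per-task time budgets, in minutes (invented for illustration).
    TIME_BUDGETS = {"Task 1": 10, "Task 2": 15, "Task 3": 5}

    for task, budget_min in TIME_BUDGETS.items():
        start = time.monotonic()
        input(f"{task}: press Enter when the participant finishes or time is called... ")
        elapsed_min = (time.monotonic() - start) / 60
        # Flag tasks that ran over so the team knows to move on next time.
        status = "within budget" if elapsed_min <= budget_min else "over budget; move on"
        print(f"{task}: {elapsed_min:.1f} min of {budget_min} min ({status})")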

ETHICS NOTE

UNDERSTANDING THE ETHICS OF INFORMED CONSENT

For legal and ethical reasons, organizations that conduct usability testing—especially tests that involve recording the test participant’s behavior—abide by the principle of informed consent. Informed consent means that the organization fully informs the participant of the conditions under which the test will be held, as well as how the results of the test will be used. Only if the participant gives his or her consent, in writing, will the test occur.

When you obtain informed consent for tests that involve recording, be sure to do the following six things:

  • Explain that the test participant can leave at any time and can report any discomfort to the testing team at any time, at which point the team will stop the test.
  • Explain that a video camera will be used and, before the recording begins, ask for permission to record the test participant.
  • Explain the purpose of the recording and the uses to which it will be put. If, for example, the recording might be used later in advertising, the test participant must be informed of this.
  • Explain who will have access to the recording and where it might be shown. A participant might object to having the recording shown at a professional conference, for example.
  • Explain how the test participant’s identity will be disguised—if at all—if the recording is shown publicly.
  • Explain that the test participant will have the opportunity to hear or view the recording and then change his or her mind about how it might be used.

Debriefing the Test Participant

After the test, testers usually have questions about the test participant’s actions. For this reason, they debrief the participant in an interview. The debriefing is critically important, for once the participant walks out the door, it is difficult and expensive to ask any further questions, and the participant likely will have forgotten the details. Consequently, the debriefing can take as long as the test itself did.

While the participant fills out a posttest questionnaire, the test team quickly looks through the data log and notes the most important areas to investigate. Their purpose in debriefing is to obtain as much information as possible about what occurred during the test; their purpose is not to think of ways of redesigning the product to prevent future problems. Usability specialists Jeffrey Rubin and Dana Chisnell (2008) suggest beginning the debriefing with a neutral question, such as “So, what did you think?” This kind of question encourages the participant to start off with an important suggestion or impression. During the debriefing session, testers probe high-level concerns before getting to the smaller details. They try not to get sidetracked by a minor problem.

INTERPRETING AND REPORTING THE DATA FROM A USABILITY TEST

After a usability test, testers have a great deal of data, including notes, questionnaires, and videos. Turning that data into useful information involves three steps:
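However those steps are divided up, much of the tabulation lends itself to simple scripting. The Python sketch below uses invented sample data to show one way a team might summarize completion rates and average task times across participants, the kind of figures a formal test report typically includes.

    from statistics import mean

    # Invented sample data: (participant, task, completed the task?, minutes taken).
    observations = [
        ("P1", "Task 1", True, 7.5),
        ("P2", "Task 1", True, 9.0),
        ("P3", "Task 1", False, 10.0),
        ("P1", "Task 2", True, 12.0),
        ("P2", "Task 2", False, 15.0),
        ("P3", "Task 2", True, 11.5),
    ]

    # Summarize each task: what fraction of participants completed it, and how long it took.
    for task in sorted({row[1] for row in observations}):
        rows = [row for row in observations if row[1] == task]
        completion_rate = sum(row[2] for row in rows) / len(rows)
        avg_minutes = mean(row[3] for row in rows)
        print(f"{task}: {completion_rate:.0%} completion, {avg_minutes:.1f} min average")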

Although usability testing might seem extremely expensive and difficult, testers who are methodical, open-minded, and curious about how people use their documents or websites find that it is the least expensive and most effective way to improve quality.