User Experience Research: Evaluation

Usability Evaluation assesses the extent to which an interactive system is easy and pleasant to use.

It generally takes one of two forms:
  • Formative evaluation: conducted early in the design process with low-fidelity prototypes – this evaluation requires the designer to collect the data by hand (e.g. time to complete the task, clicks, etc.).
  • Summative evaluation: conducted with high-fidelity prototypes or a near-final interface – this evaluation might produce data on how the user interacted with the system (e.g. log data).
The type of prototype affects the environment where the testing takes place:
  • Low-fidelity prototypes require testing in a controlled environment (e.g. a lab)
  • High-fidelity prototypes can be tested in the wild (e.g. on the user’s phone or at a kiosk)
A thorough evaluation requires that we consider whether the design is effective, that is, we measure to what degree the goals of the task are met. This can be accomplished by collecting quantitative data in the form of questionnaires or log data of the path the user traversed while completing the task, or qualitative data in the form of user interviews.
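To make this concrete, here is a minimal Python sketch (the session outcomes and variable names are made up for illustration) of how effectiveness is often summarized from logged outcomes as a task completion rate:

```python
# Hypothetical task outcomes from a usability test: True means the participant
# reached the task goal, False means they failed or abandoned the task.
session_outcomes = [True, True, False, True, True, False, True, True]

# Effectiveness is often summarized as the task completion (success) rate.
completion_rate = sum(session_outcomes) / len(session_outcomes)
print(f"Task completion rate: {completion_rate:.0%}")  # 75% for this data
```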

We will be able to ascertain if the design is efficient by evaluating various task-completion measures. These include time to complete the task, number of clicks, or number of errors while performing the task.
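Here is a similar sketch, again with made-up per-session log summaries, showing how these efficiency measures can be aggregated across sessions:

```python
# Hypothetical per-session log summaries from a usability test.
sessions = [
    {"participant": "P1", "seconds": 74,  "clicks": 12, "errors": 1},
    {"participant": "P2", "seconds": 102, "clicks": 18, "errors": 3},
    {"participant": "P3", "seconds": 58,  "clicks": 9,  "errors": 0},
]

# Efficiency is typically reported as the mean (or median) of each measure.
n = len(sessions)
mean_time = sum(s["seconds"] for s in sessions) / n
mean_clicks = sum(s["clicks"] for s in sessions) / n
mean_errors = sum(s["errors"] for s in sessions) / n
print(f"Mean time on task: {mean_time:.1f} s")
print(f"Mean clicks: {mean_clicks:.1f}")
print(f"Mean errors: {mean_errors:.1f}")
```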

Notice that we can infer learnability and memorability by using some of the same measures I just mentioned.

Learnability refers to how easily a user can complete a task successfully on their early attempts with the interface. We can get an objective measure of this by looking at the number of clicks or the amount of time it takes to complete a task, and then comparing these to expert performance.

Memorability refers to how easy it is to remember how to use a product, or more specifically, how to perform a given task on an interface after repeated trials.

We can measure amount of time or number of clicks to complete a task over repeated trials to get a measure of memorability.
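As a small, hypothetical illustration (the trial times and expert baseline are invented), learnability and memorability can be read off the trend in such measures across trials:

```python
# Hypothetical time-on-task (seconds) for one participant across repeated
# trials, plus an expert baseline measured with the design team.
trial_times = [120, 95, 80, 72, 70]   # trials 1 through 5
expert_time = 60

# Learnability: performance should trend toward the expert baseline.
ratios = [t / expert_time for t in trial_times]
print("Ratio to expert per trial:", [round(r, 2) for r in ratios])

# A retest after a break (hypothetical) probes memorability: if the time stays
# close to the last trial rather than the first, the task was easy to remember.
retest_time = 75
print("Regression after break:", retest_time - trial_times[-1], "seconds")
```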

We also need indicators of the user's subjective satisfaction while executing the task.

These cover both cognitive and emotional aspects of task completion. We will refer to cognitive measures as those that relate to the mental effort required to complete the task. For example, were the steps required to complete the task intuitive?

For the emotional component, we want to have a sense of the feelings the user experienced as she completed the task. The two might be correlated: a task that is unintuitive might leave the user feeling frustrated.
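One common way to quantify satisfaction is a standardized post-test questionnaire such as the System Usability Scale (SUS). Below is a sketch of its standard scoring; the responses here are made up:

```python
# Hypothetical responses (1-5) to the 10 System Usability Scale (SUS) items
# collected after the session; SUS is one common way to quantify satisfaction.
responses = [4, 2, 5, 1, 4, 2, 5, 2, 4, 1]

# Standard SUS scoring: odd-numbered items contribute (score - 1), even items
# contribute (5 - score); the sum is scaled by 2.5 to a 0-100 range.
score = 2.5 * sum(
    (r - 1) if i % 2 == 0 else (5 - r)  # i is 0-based, so even i = odd item
    for i, r in enumerate(responses)
)
print(f"SUS score: {score}")  # 0-100, higher is better
```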

Here’s a sample of the kind of data matrix you might collect after a usability session. This is not exhaustive. It’s just an example.
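Since the matrix itself is not reproduced here, the following made-up rows give a flavor of what it might contain – one row per participant-task pair, combining effectiveness, efficiency, and satisfaction measures:

```python
# A made-up illustration of a usability-session data matrix: one row per
# participant-task pair, mixing effectiveness, efficiency, and satisfaction.
data_matrix = [
    # participant, task, success, seconds, clicks, errors, satisfaction (1-5)
    ("P1", "checkout", True,  74,  12, 1, 4),
    ("P1", "search",   True,  31,   5, 0, 5),
    ("P2", "checkout", False, 140, 22, 4, 2),
    ("P2", "search",   True,  45,   7, 1, 4),
]
```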

It’s important to remember that the usability measures we just discussed must be considered in relation to either the values obtained with the status quo interface – that is, the user’s current practices.

Or, if we are designing a completely new interaction, we can compare the user’s values to some other objective measure of success – for example, the values obtained when the design team (you might consider these people experts) uses the novel design.

Advanced evaluation techniques include:

  1. Heuristic Evaluation
  2. Cognitive walkthrough

Once the evaluation data is collected and analyzed, the designer is in a position to iterate on the design. This may lead to another round of alternative designs. It might lead to prototype building and more evaluation. When do you stop? Well, one rule of thumb is that you stop when you have met your design objectives. And this translates to an evaluation cycle that shows that the user can interact with your design in an effortless and enjoyable manner.

To learn more, check out:

  • Usability Evaluation 101 – Usability.Gov
  • Interaction Design, Chapter 15: Usability Evaluation
  • WQUsability – More than Easy to Use
  • Measuring Usability: Are Effectiveness, Efficiency, and Satisfaction Really Correlated? (CHI 2000)
  • Usability 101: Introduction to Usability – Nielsen Norman Group