Usability Evaluation assesses the extent to which an interactive system is easy and pleasant to use.
- Formative evaluation: conducted early in the design process with low-fidelity prototypes – this evaluation requires the designer to collect the data by hand (e.g., time to complete the task, number of clicks, etc.).
- Summative evaluation: conducted with high-fidelity prototypes or a near-final interface – this evaluation might produce data on how the user interacted with the system (e.g., log data).
- Low-fidelity prototypes require testing in a controlled environment (e.g., a lab).
- High-fidelity prototypes can be tested in the wild (e.g., on the user's phone or at a kiosk).
We can ascertain whether the design is efficient by evaluating various task-completion measures: time to complete the task, number of clicks, and number of errors made while performing the task.
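As a concrete sketch of how these efficiency measures might be computed, here is a minimal example assuming a hypothetical session log of timestamped events; the log format is invented for illustration, not a standard.

```python
# Sketch: deriving simple efficiency measures from a hypothetical session log.
# Each event is a dict with an "event" name and a timestamp "t" in seconds.

def efficiency_measures(events):
    """Return task time, click count, and error count for one session."""
    start = events[0]["t"]
    end = events[-1]["t"]
    clicks = sum(1 for e in events if e["event"] == "click")
    errors = sum(1 for e in events if e["event"] == "error")
    return {"task_time": end - start, "clicks": clicks, "errors": errors}

# Hypothetical session: two clicks and one error over 7.5 seconds.
session = [
    {"event": "start", "t": 0.0},
    {"event": "click", "t": 2.1},
    {"event": "error", "t": 4.3},
    {"event": "click", "t": 5.0},
    {"event": "end",   "t": 7.5},
]
print(efficiency_measures(session))  # {'task_time': 7.5, 'clicks': 2, 'errors': 1}
```

In a formative evaluation the designer would record these values by hand; in a summative one they would typically fall out of instrumented log data.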
Notice that we can infer learnability and memorability by using some of the same measures I just mentioned.
Learnability refers to how easily a user can complete a task successfully on first use. We can get an objective measure of this by looking at the number of clicks or the amount of time needed to complete a task, and then comparing these values to expert performance.
To measure memorability, we can track the amount of time or number of clicks needed to complete a task over repeated trials, ideally separated by periods of non-use, and see how much performance is retained.
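The two indicators above can be sketched as simple ratios against an expert baseline and against performance after a break. All the numbers, and the choice of these particular ratios, are hypothetical illustrations, not standard metrics.

```python
# Sketch: learnability and memorability indicators from per-trial completion times.
# All values are invented for illustration.

def learnability_ratio(first_trial_time, expert_time):
    """How close a first-time user is to expert performance (1.0 = expert level)."""
    return expert_time / first_trial_time

def retention(last_time_before_break, first_time_after_break):
    """Memorability indicator: performance retained after non-use (1.0 = no loss)."""
    return last_time_before_break / first_time_after_break

trial_times = [60.0, 45.0, 30.0]   # seconds, consecutive practice trials
expert_time = 20.0                 # hypothetical expert baseline
after_break_time = 36.0            # first trial after a period of non-use

print(round(learnability_ratio(trial_times[0], expert_time), 2))  # 0.33
print(round(retention(trial_times[-1], after_break_time), 2))     # 0.83
```

The downward trend across `trial_times` itself is also informative: a steep early drop suggests the interface is quick to learn.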
We also need indicators of the user's subjective satisfaction while executing the task.
These can cover both cognitive and emotional aspects of task completion. We will refer to cognitive measures as those that relate to the mental effort required to complete the task. For example, were the steps required to complete the task intuitive?
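One widely used instrument for quantifying subjective satisfaction is the System Usability Scale (SUS): ten 1–5 Likert items with alternating positive and negative wording. The sketch below applies the standard SUS scoring rule; the responses themselves are made up.

```python
# Sketch: scoring one participant's System Usability Scale (SUS) responses.
# SUS has ten 1-5 Likert items; odd-numbered items are positively worded,
# even-numbered items negatively worded. The response values are hypothetical.

def sus_score(responses):
    """Score ten SUS responses (each 1-5) on a 0-100 scale."""
    assert len(responses) == 10 and all(1 <= r <= 5 for r in responses)
    total = 0
    for i, r in enumerate(responses):
        # Odd-numbered items (index 0, 2, ...): contribution is score - 1.
        # Even-numbered items (index 1, 3, ...): contribution is 5 - score.
        total += (r - 1) if i % 2 == 0 else (5 - r)
    return total * 2.5

print(sus_score([4, 2, 4, 1, 5, 2, 4, 2, 5, 1]))  # 85.0
```

SUS gives a single satisfaction number per participant, which slots neatly into the data matrix alongside the objective measures.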
Here’s a sample of the kind of data matrix you might collect after a usability session. This is not exhaustive. It’s just an example.
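To make the idea concrete, here is a small sketch of such a matrix: one row per participant per task, one column per measure. All values are invented for illustration.

```python
from collections import defaultdict

# Hypothetical usability-session data matrix:
# participant, task, time (s), clicks, errors, satisfaction (1-5)
rows = [
    ("P1", "checkout", 72.0, 14, 2, 4),
    ("P1", "search",   35.0,  6, 0, 5),
    ("P2", "checkout", 95.0, 21, 4, 2),
    ("P2", "search",   41.0,  8, 1, 4),
]

# Aggregate per task, e.g. mean completion time across participants.
times = defaultdict(list)
for _, task, t, *_ in rows:
    times[task].append(t)
means = {task: sum(ts) / len(ts) for task, ts in times.items()}
print(means)  # {'checkout': 83.5, 'search': 38.0}
```

A real study would hold many more rows and columns; the point is that each cell is one measure for one participant on one task.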
It's important to remember that the usability measures we just discussed must be considered relative to a baseline: for example, the values recorded with the status-quo interface, that is, the user's current practices.
Or, if we are designing a completely new interaction, we can compare the user's values to some other objective measure of success: for example, the values obtained when the design team, whose members you might consider experts, uses the novel design.
More advanced evaluation techniques include:
- Heuristic Evaluation
- Cognitive walkthrough
Once the evaluation data is collected and analyzed, the designer is in a position to iterate on the design. This may lead to another round of alternative designs. It might lead to prototype building and more evaluation. When do you stop? Well, one rule of thumb is that you stop when you have met your design objectives. And this translates to an evaluation cycle that shows that the user can interact with your design in an effortless and enjoyable manner.
To learn more, check out:
- Usability Evaluation 101, by Usability.gov
- Interaction Design, Chapter 15: Usability Evaluation
- WQUsability, More than Easy to Use
- Measuring Usability: Are Effectiveness, Efficiency, and Satisfaction Really Correlated? (CHI 2000)
- Usability 101: Introduction to Usability, by Nielsen Norman Group