Wireframes are visual representations, or mockups, of an interface using only simple shapes. They are devoid of design elements such as colors, fonts, or images, and they are used to communicate ideas and represent the layout of a website in the early stages of a project.
They are usually produced before the design phase begins, in order to get business approval on the structure of the design itself.
It is important that:
- Elements on the page have a good aspect ratio for the content they contain.
- White space should give elements room to breathe, but should never be so large that connected elements get lost.
- If more than one header is shown, headers adding relevant information should be large, while others should be either small (e.g. where the header is mostly implied by the content) or omitted (where the header is completely implied by the content).
- Vertical space is used wisely.
- As a rule of thumb, multi-line text and headers that repeat down the page should be left-justified. Single lines can be centered. With tabular data and forms, the left column can be right-justified.
- To avoid:
- Extraneous lines & ‘chartjunk’
- Unnecessarily repeated elements on the same page.
- Inconsistent layout choices.
- Information of minimal relevance to common tasks.
Usability Evaluation assesses the extent to which an interactive system is easy and pleasant to use.
It generally takes one of two forms:
- Formative evaluation: conducted early in the design process with low-fidelity prototypes. This evaluation requires the designer to collect the data (e.g. time to complete the task, number of clicks, etc.).
- Summative evaluation: conducted with high-fidelity prototypes or a near-final interface. This evaluation can produce data on how the user interacted with the system (e.g. log data).
The type of prototype affects the environment where the testing takes place:
- Low-fidelity prototypes require testing in a controlled environment (e.g. a lab).
- High-fidelity prototypes can be tested in the wild (e.g. on the user's phone or a kiosk).
A thorough evaluation requires that we consider whether the design is effective, i.e. we measure to what degree the goals of the task are met. This can be accomplished by collecting quantitative data in the form of questionnaires or log data of the path the user traversed while completing the task, or qualitative data in the form of user interviews.
We can ascertain whether the design is efficient by evaluating various task-completion measures, such as time to complete the task, number of clicks, or number of errors made while performing the task.
Notice that we can infer learnability and memorability using some of these same measures.
Learnability refers to how easily a new user can complete a task successfully. We can get an objective measure of this by looking at the number of clicks or the amount of time needed to complete a task, and comparing these to expert performance.
Memorability refers to how easy it is to remember how to use a product, or more specifically, how to perform a given task on an interface after repeated trials.
We can measure amount of time or number of clicks to complete a task over repeated trials to get a measure of memorability.
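The measures above can be sketched in code. This is a minimal illustration, not a standard formula: the trial data, the expert baseline, and the two indicator definitions (first-trial time relative to an expert, and relative improvement from first to last trial) are all assumptions made for the example.

```python
# Sketch: deriving learnability and memorability indicators from
# per-trial completion times. Data and baseline are illustrative.

EXPERT_TIME_S = 30.0  # assumed expert completion time for the task

# Each inner list is one user's completion times (seconds) over
# repeated trials of the same task.
trials = [
    [95.0, 60.0, 42.0, 38.0],
    [120.0, 70.0, 55.0, 40.0],
]

def learnability(user_trials):
    """First-trial time relative to expert performance (1.0 = expert-level)."""
    return EXPERT_TIME_S / user_trials[0]

def memorability(user_trials):
    """Relative improvement from the first trial to the last trial."""
    first, last = user_trials[0], user_trials[-1]
    return (first - last) / first

for user in trials:
    print(f"learnability={learnability(user):.2f}, "
          f"memorability={memorability(user):.2f}")
```

The same per-trial log could feed the efficiency measures (clicks, errors) by swapping the column being aggregated.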
We also need indicators of subjective user satisfaction while executing the task.
These can cover both cognitive and emotional aspects of task completion. We will refer to cognitive measures as those that relate to the mental effort required to complete the task. For example, were the steps required to complete the task intuitive?
For the emotional component, we want a sense of the feelings the user experienced while completing the task. The two may be correlated: an unintuitive task may leave the user feeling frustrated.
Here’s a sample of the kind of data matrix you might collect after a usability session. This is not exhaustive. It’s just an example.
It's important to remember that the usability measures we just discussed must be considered against a baseline: either the values recorded with the status quo interface (the user's current practices), or, if we are designing a completely new interaction, some other objective measure of success, such as the values obtained when the design team (whom you might consider experts) uses the novel design.
Advanced evaluation techniques include:
- Heuristic evaluation
- Cognitive walkthrough
Once the evaluation data is collected and analyzed, the designer is in a position to iterate on the design. This may lead to another round of alternative designs. It might lead to prototype building and more evaluation. When do you stop? Well, one rule of thumb is that you stop when you have met your design objectives. And this translates to an evaluation cycle that shows that the user can interact with your design in an effortless and enjoyable manner.
To learn more, see:
- Usability Evaluation 101, by Usability.gov
- Interaction Design, Chapter 15: Usability Evaluation
- WQusability – More Than Easy to Use
- Measuring Usability: Are Effectiveness, Efficiency, and Satisfaction Really Correlated? (CHI 2000)
- Usability 101: Introduction to Usability, by Nielsen Norman Group
Low-fidelity prototypes include:
- Sketching – a free-hand depiction of images related to the final design.
- Storyboards – a common way to provide a narrative that puts the design into context, offering an opportunity to see how the user will engage with a given scenario. The storyboard below is from RuoCheng.me.
- Card-based prototypes – allow us to see the sequence of interactions the user might have with the designed interface.
- Usability.com/Prototyping – includes an excellent graph by Tracy Lepore visually showing the evolution from sketch to design.
- Lo-Fi vs. Hi-Fi Prototyping: How Real Does the Real Thing Have to Be? by Florian N. Egger on telenovo.com.
- High-Fidelity vs. Low-Fidelity Prototyping in Web Design and App Development, by Kim Doleatto on http://www.atlargeinc.com/
- UX Recorder – good for mobile user testing on iOS.
- InVision – free design prototyping tool.
- Marvel App – prototyping tool for all types of devices; a free version is available.
- Axure – the most widely used prototyping software.
Let's start by dispelling an SEO myth: meta keywords are dead and should not be used. Worse, a site heavily optimized with meta keywords has been counterproductive for your ranking since the latest Google Penguin release, and there are in fact rumors that Bing also uses them to determine whether a site is spam or genuine.
On the contrary, meta descriptions are crucial: they appear in the search results as the snippet describing the website or page. Each page should have its own unique meta description: search engines consider duplicate title tags or meta descriptions bad form and can actually penalize the page.
If you serve a separate mobile version of your website, duplicate content displayed across devices is a serious issue. Google recommends adding a canonical tag to each mobile-optimized page and an alternate tag to the corresponding desktop page. This way, its search engine can easily determine and show the device-optimized page to the user. You can also use canonical tags to distinguish your content from a partner's.
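As a sketch, the annotation pattern looks like the following; the example.com and m.example.com URLs are placeholders, and the media query breakpoint is an illustrative choice:

```html
<!-- On the desktop page (https://example.com/page), point to the
     mobile-optimized version with an alternate tag: -->
<link rel="alternate"
      media="only screen and (max-width: 640px)"
      href="https://m.example.com/page">

<!-- On the mobile page (https://m.example.com/page), declare the
     desktop page as canonical: -->
<link rel="canonical" href="https://example.com/page">
```

The two tags form a pair: the alternate tag on the desktop page and the canonical tag on the mobile page must reference each other for the search engine to connect the versions.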
In recent times, rich snippets, which enable sites to embellish their search results with add-ons such as customer ratings, have gained popularity. Although it is still unclear how they might affect your website's ranking, recent studies suggest that a user is more likely to engage with sites that have richer snippets.
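Rich snippets are driven by structured data markup in the page. A hedged sketch using schema.org JSON-LD for a product with customer ratings (the product name and rating values are made up for illustration):

```html
<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "Product",
  "name": "Example Product",
  "aggregateRating": {
    "@type": "AggregateRating",
    "ratingValue": "4.4",
    "reviewCount": "89"
  }
}
</script>
```

Microdata and RDFa attributes on the HTML itself are alternative ways to express the same schema.org vocabulary.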
Last but not least, you can claim Google authorship by linking your site with your Google+ account.
At the end of each usability test, you will collect several types of data and report the findings in a spreadsheet like the example below, where each response is ranked based on the calculated KPIs: (1) success rate; (2) task time; (3) error rate; and (4) satisfaction.
For example, you can use the following scale to report your findings, assigning a value of: (1) 0 for every failed task; (2) 1 for every task successfully completed within 1 to 2 minutes; (3) 2 for every task successfully completed in under 1 minute.
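The scale above is simple to apply mechanically. A minimal sketch: the task names and timings are invented sample data, and the treatment of completions over 2 minutes as failures is an assumption the scale leaves unstated.

```python
# Sketch: applying the 0/1/2 scoring scale to raw task results.
# Sample data and the >2-minute rule are illustrative assumptions.

def score_task(completed, time_s):
    """0 = failed; 1 = completed in 1-2 min; 2 = completed under 1 min."""
    if not completed:
        return 0
    if time_s < 60:
        return 2
    if time_s <= 120:
        return 1
    return 0  # assumption: completions over 2 minutes count as failures

results = [
    {"task": "subscribe to newsletter", "completed": True, "time_s": 45},
    {"task": "find event page", "completed": True, "time_s": 95},
    {"task": "submit contact form", "completed": False, "time_s": 180},
]

for r in results:
    print(r["task"], "->", score_task(r["completed"], r["time_s"]))
```

Averaging these scores per question and per user segment produces the matrix discussed next.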
ANALYZING THE USABILITY TESTING RESULTS
The results matrix reports the average response by question alongside the average response by segment:

| Avg. Response by Question | Avg Seg. 1 | Avg Seg. 2 | Avg Seg. N |
|---------------------------|------------|------------|------------|
As you review the data, it is extremely important to highlight and prioritize every detected usability issue. To help differentiate, you should note the severity of the problems on a scale defined as below:
REPORTING SEVERITY LEVELS OF PROBLEMS

| Severity | Description |
|----------|-------------|
| 1 | Severe business and usability impact. This is a showstopper: the user is unable to complete the task. |
| 2 | Significant business and usability impact, but not necessarily a showstopper: some users will be frustrated if the issue is not fixed. It is not a showstopper if the affected user segment has no or low value for the business. |
| 3 | Potential business and usability impact. More impact analysis is needed. Not a showstopper. |
| 4 | No immediate business and usability impact. Users are annoyed, but this doesn't keep them from completing the task. |
Your report should include your recommendations and their UX/business/technological impact. Below are the minimum required points:
| Recommendation | Effort | Action |
|----------------|--------|--------|
| Provide a user acknowledgement form after submitting the form | Easy to fix | Fix this issue in the current release |
| Redesign the Event section | Time-consuming task, not in the sprint budget | Fix this issue in the next available release |
A selection of templates to be used for your reporting can be found at:
Formative testing is usually conducted during website development and is typically used to test a specific feature, such as the design of a button or the findability of a specific piece of content.
For example: is the user able to find the email form and subscribe to the newsletter? Is the user able to find a piece of content given the labeling of the button?
Summative testing is conducted at the end of the development, the “Summation” of the development process. Summative testing is used to determine whether a website has successfully accomplished all of its goals.
Qualitative vs. Quantitative Testing
Quantitative testing is related to, measures, or is measured by the quantity of something rather than its quality; qualitative testing concerns the quality of something rather than its quantity.
In this presentation we analyze what UX mistakes were made in the redesign of the Barclaycard Bespoke Offers website and what tips can be used to improve conversion and drive sales.