Let’s take a deep dive into heuristic evaluations with this 7-minute article. You’ll learn how to conduct a heuristic evaluation effectively for your products, how to use it the right way at the right time, and how it differs from user testing.
- Heuristic Evaluation in Product Design: The Not-Too-Talked-About Concept
- Heuristic Evaluation vs. User Testing: There Is a Big Difference
- How to Effectively Conduct a Heuristic Evaluation on Your Product Designs
- Using Heuristic Evaluations for the Right Purpose
Ready to explore with us?
Heuristic Evaluation in Product Design
A heuristic evaluation is a way to test whether a website or product is user-friendly; in other words, it tests usability. Unlike user testing, where the site (or prototype) is evaluated by users, in a heuristic evaluation the site is evaluated by usability experts, which is why it is sometimes referred to as an “expert review”.
The process involves one or more experts evaluating how well a product complies with a set of heuristics that define its usability. The experts often use a score-card with a numeric rating for each heuristic, weighted by that heuristic’s impact on usability.
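The score-card idea above can be sketched in a few lines. This is a minimal illustration, not a standard instrument: the heuristic names, the 0–4 rating scale, and the weights are all hypothetical assumptions.

```python
# Hypothetical weights per heuristic (higher = bigger impact on usability).
# These names and values are illustrative, not a standard set.
WEIGHTS = {
    "visibility_of_system_status": 1.0,
    "match_with_real_world": 0.8,
    "error_prevention": 1.2,
}

def overall_score(ratings: dict) -> float:
    """Weighted average of per-heuristic ratings (0 = broken, 4 = excellent)."""
    total_weight = sum(WEIGHTS[h] for h in ratings)
    weighted = sum(WEIGHTS[h] * r for h, r in ratings.items())
    return weighted / total_weight

score = overall_score({
    "visibility_of_system_status": 3,
    "match_with_real_world": 4,
    "error_prevention": 2,
})
print(round(score, 2))  # overall weighted usability score
```

In practice each evaluator would fill out such a score-card independently, and the per-heuristic ratings would be discussed in the debriefing.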
Heuristics can be thought of as rules of thumb. A heuristic evaluation or expert review of a web or mobile site is based on a set of predetermined heuristics or qualitative guidelines.
Heuristic evaluations are especially valuable in early-stage design and development, and in smaller organizations that may lack the budget or resources for a robust user-testing program but still need to validate design decisions and ensure a good user experience.
Heuristic Evaluation vs. User Testing: There Is a Big Difference
A heuristic evaluation can be used at any stage of a site’s development, including the early stages when you are working with paper prototypes. Nielsen recommends using it in conjunction with user testing: administering the heuristic evaluation first catches many of the ‘obvious’ errors before you engage in time-consuming and expensive user testing. The two methods largely uncover different insights and errors.
Ideally, you would do both at several different stages of development. As the more obvious problems are discovered and solved, the less obvious ones become easier to spot and correct.
How to Effectively Conduct a Heuristic Evaluation on your Product Designs
Establish an appropriate list of heuristics.
You can use Nielsen and Molich's 10 heuristics or another set, such as Ben Shneiderman’s Eight Golden Rules, as inspiration and a stepping stone. Make sure to combine them with other relevant design guidelines and market research.
Select your evaluators.
Choose your evaluators carefully. They should not be your end users; they should be usability experts, preferably with domain expertise in your product’s industry. For example, an evaluator investigating a point-of-sale system for the restaurant industry should have at least a general understanding of restaurant operations.
Brief your evaluators so they know exactly what they are meant to do and cover during their evaluation.
The briefing session should be standardized to ensure the evaluators receive the same instructions; otherwise, you may bias their evaluation. Within this brief, you may wish to ask the evaluators to focus on a selection of tasks, but sometimes they may state which ones they will cover based on their experience and expertise.
First evaluation phase.
The first evaluation generally takes around two hours, depending on the nature and complexity of your product. The evaluators will use the product freely to gain a feel for the methods of interaction and the scope. They will then identify specific elements that they want to evaluate.
Second evaluation phase.
In the second evaluation phase, the evaluators carry out another run-through, applying the chosen heuristics to the elements identified during the first phase. They focus on individual elements and look at how well each fits into the overall design.
Either the evaluators record problems themselves, or you record them as the evaluators carry out their tasks. Ask the evaluators to be as detailed and specific as possible when recording problems.
The debriefing session involves collaboration between the different evaluators to collate their findings and establish a complete list of problems. They should then be encouraged to suggest potential solutions for these problems based on the heuristics.
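The collation step above can be sketched as follows: merge each evaluator's problem list, de-duplicate problems reported by several evaluators, and rank by severity. The example records and the 0–4 severity scale (loosely following Nielsen's severity ratings, 0 = cosmetic, 4 = usability catastrophe) are illustrative assumptions.

```python
from collections import defaultdict

# (element, heuristic violated, severity) tuples from each evaluator -- example data.
evaluator_findings = {
    "Evaluator A": [("checkout button", "visibility of system status", 3),
                    ("error page", "help users recover from errors", 4)],
    "Evaluator B": [("checkout button", "visibility of system status", 2),
                    ("search field", "user control and freedom", 1)],
}

def collate(findings):
    """Merge findings; keep the highest severity reported for each problem."""
    merged = defaultdict(lambda: {"severity": 0, "reported_by": []})
    for evaluator, problems in findings.items():
        for element, heuristic, severity in problems:
            entry = merged[(element, heuristic)]
            entry["severity"] = max(entry["severity"], severity)
            entry["reported_by"].append(evaluator)
    # Highest-severity problems first.
    return sorted(merged.items(), key=lambda kv: -kv[1]["severity"])

for (element, heuristic), info in collate(evaluator_findings):
    print(f"[{info['severity']}] {element} violates '{heuristic}' "
          f"(reported by {len(info['reported_by'])} evaluator(s))")
```

A real debriefing would of course happen in conversation, but keeping findings in a structured list like this makes it easy to see which problems multiple evaluators agree on.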
In general, the more evaluators you have, the more usability issues you will unearth, especially when the evaluators have different skill sets. However, Jakob Nielsen suggests that between three and five evaluators is sufficient. With five evaluators, you should be able to identify up to 75% of all issues. While increasing the number of evaluators will help you find more issues, it may not be worth the time and effort.
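The diminishing returns described above are often modelled with Nielsen and Landauer's curve: the proportion of problems found by n evaluators is 1 − (1 − L)^n, where L is the probability that a single evaluator spots a given problem. A quick sketch of that curve; the value L = 0.31 is roughly the average detection rate Nielsen reports, and your own rate will vary by product and evaluator expertise.

```python
def proportion_found(single_rate: float, n_evaluators: int) -> float:
    """Expected share of usability problems found by n independent evaluators,
    per Nielsen and Landauer's model: 1 - (1 - L)**n."""
    return 1 - (1 - single_rate) ** n_evaluators

# Diminishing returns as you add evaluators (assuming L = 0.31).
for n in (1, 3, 5, 10):
    print(f"{n} evaluators -> {proportion_found(0.31, n):.0%} of problems")
```

The curve flattens quickly, which is why three to five evaluators is usually the sweet spot: the tenth evaluator adds far fewer new findings than the second.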
Using Heuristic Evaluations for the Right Purpose
Like any research and design method, heuristic evaluation as a usability inspection method has both pros and cons. Let’s examine a few of them:
Pros of Heuristic Evaluation
i. Heuristics can help highlight potential usability issues early in the design process.
ii. It is a fast and inexpensive tool compared with other methods involving real users.
Cons of Heuristic Evaluation
i. Heuristic evaluation depends on the knowledge and expertise of the evaluators. Training the evaluators or hiring external evaluators might increase the time and money required for conducting the evaluation.
ii. Heuristic evaluation is based on assumptions about what “good” usability is. Because the heuristics are grounded in research, those assumptions are often sound. However, heuristic evaluations are no substitute for testing with real users: heuristics are, as the name suggests, guidelines, not rules set in stone.
iii. Heuristic evaluation can end up giving false alarms. In their article, “Usability testing vs. heuristic evaluation: A head-to-head comparison,” Robert Bailey, Robert Allan and P. Raiello found that 43% of 'problems' identified by experimental heuristic evaluations were not actually problems. Furthermore, evaluators could only identify 21% of genuine usability problems in comparison with usability testing.