Heuristic Evaluation is an important part of any interface evaluation and re-design exercise. This discount usability inspection method can surface important information to help mitigate usability issues.
There is ongoing debate about the effectiveness of Heuristic Evaluation: merely finding a large number of problems with the interface does not make the method superior or more effective. The quality of the findings themselves is what matters.
This presentation analyzes the process and suggests some possible solutions to mitigate its problems.
2. Heuristic Evaluation – a journey
Scenario
An interaction designer is carrying out a Heuristic Evaluation, with the interface and Nielsen's 10 usability principles at their disposal.
OBSERVATION
Observes the interface. This could be a single webpage or a complete website. Navigates through the website and tries the functionality first-hand. In the absence of a working prototype, falls back on visual inspection alone.
EVALUATION & CATEGORIZATION
Figures out the usability problems and categorizes them; this exercise can span several unrelated webpages. Maps each problem to one of the 10 principles and further categorizes it by the severity of the issue.
DOCUMENTATION
Documents the problems and prepares a usability report: describes each issue, places screenshots pointing out the specific occurrence(s) of the issue, and recommends solutions to these specific problems.
IMPLEMENTATION
Uses the findings to bring about improvements in the redesign phase: implements the previously noted recommendations and also extends the improvements to other usability problems throughout the site.
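As a rough illustration of what the categorization and documentation steps produce, here is a minimal sketch of a single report entry. The class, field, and file names are hypothetical, while the heuristic and severity labels follow Nielsen's lists:

```python
from dataclasses import dataclass
from enum import Enum

class Heuristic(Enum):
    """Nielsen's 10 usability heuristics."""
    VISIBILITY_OF_SYSTEM_STATUS = 1
    MATCH_SYSTEM_AND_REAL_WORLD = 2
    USER_CONTROL_AND_FREEDOM = 3
    CONSISTENCY_AND_STANDARDS = 4
    ERROR_PREVENTION = 5
    RECOGNITION_RATHER_THAN_RECALL = 6
    FLEXIBILITY_AND_EFFICIENCY = 7
    AESTHETIC_AND_MINIMALIST_DESIGN = 8
    HELP_USERS_RECOVER_FROM_ERRORS = 9
    HELP_AND_DOCUMENTATION = 10

class Severity(Enum):
    """Severity labels as in Nielsen's rating scale."""
    COSMETIC = 1
    MINOR = 2
    MAJOR = 3
    CATASTROPHE = 4

@dataclass
class UsabilityIssue:
    page: str              # URL or screen where the issue occurs
    description: str       # what the evaluator observed
    heuristic: Heuristic   # which principle the issue violates
    severity: Severity     # how badly it affects the user
    screenshot: str        # path to an annotated screenshot
    recommendation: str    # suggested fix for the redesign phase

# One hypothetical entry in the usability report
issue = UsabilityIssue(
    page="/checkout",
    description="No feedback after clicking 'Pay'; users click twice.",
    heuristic=Heuristic.VISIBILITY_OF_SYSTEM_STATUS,
    severity=Severity.MAJOR,
    screenshot="screenshots/checkout-pay.png",
    recommendation="Disable the button and show a progress indicator.",
)
```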
3. Heuristic Evaluation – problems and possible solutions
What are the problems…
• Usability problems occur in context, not in isolation. Evaluating discrete pages highlights issues out of context and can miss problems that only show up at the task-flow level.
• Scanning through a full website to pick out pointers from various pages is a tedious process; yet without a comprehensive look at the full website, the problems cannot be pointed out in their entirety.
What can be improved…
• Choose screen-flows based on the primary task-flows in the website: pick out the top three most important flows and then evaluate each screen along them.
• Have multiple evaluators for the portal. Different evaluators tend to find different problems within a short time-span, and together they can scan more of the website than a single evaluator can.
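The benefit of adding evaluators can be estimated with Nielsen and Landauer's problem-discovery model, in which the share of problems found by n independent evaluators is 1 − (1 − λ)^n, where λ is the single-evaluator detection rate (around 0.31 in Nielsen's aggregated case studies). A minimal sketch, assuming that published figure:

```python
def proportion_found(n_evaluators: int, lam: float = 0.31) -> float:
    """Expected share of usability problems found by n independent
    evaluators (Nielsen & Landauer's model); lam is the single-evaluator
    detection rate, roughly 0.31 in Nielsen's aggregated case studies."""
    return 1 - (1 - lam) ** n_evaluators

for n in (1, 3, 5):
    print(f"{n} evaluator(s): ~{proportion_found(n):.0%} of problems")
# 1 evaluator(s): ~31% of problems
# 3 evaluator(s): ~67% of problems
# 5 evaluator(s): ~84% of problems
```

The curve flattens quickly, which is why three to five evaluators is the commonly cited sweet spot.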
4. Heuristic Evaluation – problems and possible solutions
What are the problems…
• It is not possible to find domain-specific usability problems without domain knowledge.
• Even in simple evaluations, the number of issues pointed out will vary with the capability and experience of the evaluator.
• A simple heuristic evaluation based on Nielsen's 10 principles is not enough to categorize all usability problems.
• Not all findings indicate real usability problems: the actual number of usability problems detected through user research might not coincide with the evaluation findings.
What can be improved…
• Have SMEs set the heuristics for the BUs they hold expertise in. An evaluator who is expert in both usability and the business domain will find the stark problems, not only the minor ones.
• Allocate evaluators based on their evaluation experience index, a standard metric indicating the capability of the evaluator.
• Have different evaluation parameters (heuristics) based on the type of interface/business; this categorization can be standardized. Also use other types of evaluations and heuristic sets, e.g.:
  • Expert evaluations
  • Connell & Hammond's 30 usability principles
  • Gerhardt-Powals' 10 cognitive engineering principles
  • Bastien and Scapin's 18 ergonomic criteria
  • Smith & Mosier's 944 guidelines for the design of user interfaces
• Conduct user testing concurrently with heuristic evaluations. This makes sure no false alarms (the evaluation flags a problem that testing does not confirm) and no misses (testing finds a problem the evaluation did not detect) are left unaccounted for.
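As a rough sketch of that reconciliation, once each finding has a stable identifier the two sets of results can be compared directly; the issue IDs below are hypothetical:

```python
# Issue IDs are hypothetical identifiers assigned when each finding is logged.
heuristic_findings = {"ISS-01", "ISS-02", "ISS-03", "ISS-05"}
user_test_findings = {"ISS-02", "ISS-03", "ISS-07"}

# False alarms: the evaluation flagged these, but user testing found no problem.
false_alarms = heuristic_findings - user_test_findings
# Misses: user testing surfaced these, but the evaluation did not detect them.
misses = user_test_findings - heuristic_findings
# Confirmed: found by both methods.
confirmed = heuristic_findings & user_test_findings

print(f"false alarms: {sorted(false_alarms)}")  # ['ISS-01', 'ISS-05']
print(f"misses:       {sorted(misses)}")        # ['ISS-07']
print(f"confirmed:    {sorted(confirmed)}")     # ['ISS-02', 'ISS-03']
```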
5. Heuristic Evaluation – problems and possible solutions
What are the problems…
• Snippets or screenshots shown out of context are hard to relate back to the live interface, which makes the findings difficult to act on.
• Just reporting the number of problems per severity level or per Nielsen principle does not give enough information on where to focus the redesign.
What can be improved…
• Where possible, accompany the screenshots with hyperlinks to the respective pages in the portal. This allows the problem to be displayed instantly in a better context.
• Use more specific contextual parameters such as content, navigation, branding, and adherence to business flows, and rate the interface on each of these parameters to provide an at-a-glance idea of the key affected areas.
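One way such a per-parameter rating could be tallied and displayed is sketched below; the parameter names follow the list above, while the scale and scores are hypothetical:

```python
# Hypothetical 1-5 ratings per contextual parameter, derived from the number
# and severity of issues filed against each parameter (5 = no issues found).
ratings = {
    "content": 4.2,
    "navigation": 2.1,
    "branding": 3.8,
    "business flows": 2.7,
}

# Sort worst-first so the key affected areas are visible at a glance.
for parameter, score in sorted(ratings.items(), key=lambda kv: kv[1]):
    bar = "#" * round(score * 2)  # crude text gauge, 10 chars = perfect score
    print(f"{parameter:<15} {score:3.1f}  {bar}")
```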
6. Heuristic Evaluation – problems and possible solutions
What are the problems…
• The evaluation serves only as an appendix and not as a business document; it cannot be used as a basis for redesign time estimates.
• There is no guarantee that the evaluation points are actually rectified in the redesign. This is probable since the evaluator might not be the one redesigning the interface.
• More often than not, there is no evaluation of the evaluation: the pointers are not checked before being handed out.
What can be improved…
• Give weightage to issues and form a CI-wide issue-rating scale for the UI to help with time estimates for the redesign (a weighting sketch follows this list).
• Check the effectiveness of the redesign through a next level of evaluation by one or more additional evaluators.
• Have SMEs holding domain knowledge review and validate the evaluation before it is handed out.
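A minimal sketch of what such an issue-weighting scheme might look like; the severity weights, issue counts, and hours-per-point calibration are all hypothetical:

```python
# Hypothetical weightage per severity level on the issue-rating scale, plus
# an assumed calibration of redesign effort (hours) per weighted point.
SEVERITY_WEIGHT = {"cosmetic": 1, "minor": 2, "major": 4, "catastrophe": 8}
HOURS_PER_POINT = 1.5  # assumed figure; each team would calibrate its own

# Issue counts from a finished evaluation (hypothetical numbers).
issue_counts = {"cosmetic": 12, "minor": 7, "major": 4, "catastrophe": 1}

points = sum(SEVERITY_WEIGHT[s] * n for s, n in issue_counts.items())
print(f"weighted issue points: {points}")                                # 50
print(f"rough redesign estimate: {points * HOURS_PER_POINT:.0f} hours")  # 75 hours
```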
7. Extended Strategies
• Use one or more novices plus one or more experts for the evaluation. The expert can also set the parameters or review the evaluation.
• Test the evaluators against real usability tests to see how accurate they are, and rate them accordingly; repeat this exercise/rating every six months.
• Provide weightages for the parameters on which the ratings are done; agree on these weightages in discussion with multiple people (stakeholders, SMEs, usability experts).
• When a final rating is arrived at, use it to communicate the effort or complexity required for the redesign.
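A minimal sketch of how such a final weighted rating might be combined; the parameter weightages and ratings below are hypothetical:

```python
# Stakeholder-agreed weightages per parameter (summing to 1.0) and the
# evaluators' 1-5 ratings; both sets of numbers are hypothetical.
weightages = {"content": 0.2, "navigation": 0.4, "branding": 0.1, "business flows": 0.3}
ratings = {"content": 4.0, "navigation": 2.0, "branding": 4.0, "business flows": 3.0}

assert abs(sum(weightages.values()) - 1.0) < 1e-9, "weightages must sum to 1"

# Weighted average: a low score on a heavily weighted parameter pulls the
# final rating down, signalling a larger redesign effort.
final_rating = sum(weightages[p] * ratings[p] for p in weightages)
print(f"final rating: {final_rating:.2f} / 5")  # final rating: 2.90 / 5
```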