In the next few posts I am going to review a few tools we employ in the design process to evaluate designs. While focused inspections can't completely substitute for usability testing with live customers, they can predict many usability problems early in development, when changes are easier to make.
The three techniques I'll review, each in its own post, are:
- Claims Analysis
- Heuristic Evaluation
- Cognitive Walkthrough
In a claims analysis, several individuals representing different points of view identify positive and negative aspects of a design that may affect usability. Participants can include designers, subject matter experts, quality analysts, developers, and product managers.
A "claim" is a hypothesis about the effect of a feature, or aspect of a feature, on the user and their goals. Claims may be open-ended or made against a set list of functional requirements or design goals.
As an example, there was a recent effort to develop a graphic standard for the in-canvas pointers (cursors).
Here were the guidelines developed for this effort:
- Avoid visual clutter. Too much detail can distract the users or make the image illegible.
- Re-use images for similar methods (do not use the same image for different functions)
- Use platform standards whenever possible. E.g. Ban symbol or hourglass. Match Autodesk AEC standards where applicable.
- Keep sizes consistent
- Minimize obscuring of the canvas.
- The action portion (hotspot) of the pointer must be clear and predictable
- The pointer must be legible at all times (have sufficient contrast):
  - Over colored backgrounds
  - Over model elements
In the claims analysis, different designs were evaluated and the list above served as the basis for the claims. One participant could make a positive claim that a pointer was more legible than another; someone else might make a negative claim that a design looked too much like a pointer used for a different function.
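For teams that want to track the output of a session like this, the claims could be captured in a simple structure and tallied per design. This is just a hypothetical sketch in Python; the designs, guidelines, and claim text are all made up for illustration, not taken from the actual review.

```python
from collections import Counter
from dataclasses import dataclass


@dataclass
class Claim:
    """One hypothesis about how a design feature affects the user."""
    design: str     # which design alternative the claim is about
    guideline: str  # which guideline it is made against
    polarity: str   # "positive" or "negative"
    text: str       # the claim itself


# Hypothetical claims from a pointer-standard review session
claims = [
    Claim("Design A", "legibility", "positive",
          "Pointer stays legible over colored backgrounds"),
    Claim("Design B", "distinct images", "negative",
          "Resembles the pointer used for a different function"),
    Claim("Design A", "minimize obscuring", "negative",
          "Large outline hides nearby model elements"),
]

# Tally positive and negative claims per design to direct the next iteration
tally = Counter((c.design, c.polarity) for c in claims)
for (design, polarity), count in sorted(tally.items()):
    print(f"{design}: {count} {polarity}")
```

Even a lightweight record like this makes it easier to see which guidelines a design satisfies and where the next iteration should focus.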
This technique offers a simple framework for describing how a feature supports or frustrates customer activities and goals. It also focuses feedback in a structured way that can direct future design iterations.
This process could serve just about any discipline. Has anyone used a similar technique during a design pin-up? Next I'll discuss Heuristic Evaluation.