Deployment and Release

Don’t guess, test

  • Usability Benchmark Study: Now that everything is functioning, understand how usable your website or software is by having a representative set of users attempt tasks. Collect metrics and use confidence intervals to generate a reliable benchmark. Use standardized questionnaires (after each task and at the end of the study) where possible. This can be done in a lab environment or remotely.
  • Unmoderated Remote Usability Testing: With a live website you can have users attempt the same tasks you identified earlier in the Top Task analysis and in the formative design stage. You can record clicks and even capture full session videos that show where users get stuck, without your needing to be present.
  • Comparative Benchmark Study: How difficult are the same tasks on the competitive applications you defined in the requirements stage? Recruit users, use core metrics like completion rates, time, and task difficulty, and see the strengths and weaknesses of your website. Sometimes the best comparable is a best-in-class website that provides a similar service in a different industry. If you’re selling mobile service plans, consider comparing the checkout experience to DirecTV or Zappos.
  • A/B Testing: Don’t guess, test. Design and improvement don’t stop once you’ve released; this is much easier in the web-based application world. Test forms, buttons, copy, images, and prices. Don’t be afraid to test wild-card ideas.
  • Multivariate Testing: One-variable-at-a-time testing helps tweak the website but can take a long time if you want to test a lot, and it reveals nothing about how elements interact. For example, surprising things happen when you couple a lower price with a different product package (a two-variable interaction). You can run multivariate tests on a live website or simulate the experience in a development environment using attitudinal data instead of actual purchases.
  • Survey: Are users recommending your website or product? Do they trust it and find it appealing? Compare your scores to industry benchmarks and use standardized questions. Ask what users would improve, and associate open-ended comments with quantitative data.
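
The benchmark bullet above recommends reporting completion rates with confidence intervals. The text doesn't name a specific method; one common choice for the small samples typical of usability studies is the Adjusted-Wald (Agresti-Coull) interval, sketched here with hypothetical counts:

```python
import math

def adjusted_wald_ci(successes, trials, z=1.96):
    """Adjusted-Wald (Agresti-Coull) confidence interval for a proportion.

    Well suited to small-sample completion rates; z=1.96 gives a ~95% interval.
    """
    # Adjust the observed counts: add z^2/2 successes and z^2 trials.
    n_adj = trials + z ** 2
    p_adj = (successes + z ** 2 / 2) / n_adj
    margin = z * math.sqrt(p_adj * (1 - p_adj) / n_adj)
    # Clamp to [0, 1] so the interval stays a valid proportion.
    return max(0.0, p_adj - margin), min(1.0, p_adj + margin)

# Hypothetical benchmark: 14 of 20 users completed the task.
low, high = adjusted_wald_ci(14, 20)
print(f"Completion rate: 70%, 95% CI: {low:.0%} to {high:.0%}")
```

The wide interval (roughly 48% to 86% for 14 of 20) is exactly why the benchmark should report a range rather than a single observed rate.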
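
For the A/B testing bullet, deciding whether a variant genuinely outperformed the control usually comes down to a significance test. The article doesn't prescribe one; a standard option for comparing conversion rates is the two-proportion z-test, shown here with made-up conversion counts:

```python
import math

def two_proportion_z(conv_a, n_a, conv_b, n_b):
    """Two-proportion z-test: did variant B convert differently than A?"""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    # Pooled conversion rate under the null hypothesis of no difference.
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # Two-sided p-value from the standard normal CDF.
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value

# Hypothetical test: control converts 120/2400, variant converts 156/2400.
z, p = two_proportion_z(120, 2400, 156, 2400)
print(f"z = {z:.2f}, p = {p:.3f}")
```

With these illustrative numbers the p-value falls below the conventional 0.05 threshold, so the variant's lift would be unlikely under pure chance; with smaller samples, the same lift might not be.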