After so many shoddy sites, pop-up windows and forced registrations, the truth is that if people don't find your website easy to use, they won't come back. Worse, they'll tell their friends just how clueless you are.
The answer is, of course, to design everything around the needs of your users. We've known this for years, but there's still resistance to even the most basic usability testing.
"The site makes sense to me," designers will say. "I don't need to test it with other people." Ah, but you're very different to your users, which means it's dangerous to assume they'll use a website in the same way as you – particularly if you've just spent months building it and learning all its quirks.
Another excuse people use for not testing is: "We can just use focus groups/market research." But we're talking about two very different things here. Market research is really about "How can I attract customers?". It focuses on people's reactions to a particular marketing or brand approach.
Usability testing asks: "How can I make it easy for customers once they're here?" This touches upon people's emotional responses, but it's more about seeing whether people can use something than whether they like it.
"But surely, usability is just common sense," you might go on to argue. This attitude is understandable to an extent – well-designed systems often do look very simple and it's tempting to conclude that making things easy is, well, easy. Unfortunately, the hard part is everything leading up to the simple solution. Try just one usability test and you'll be amazed at how a site that seemed sensible to you can cause problems for others.
One final line of argument we often encounter against testing is: "It's just too expensive!" Thankfully, nowadays, this is only the case for large, formal usability testing. Sometimes multiple rounds of testing and teams of experts are entirely appropriate, but more and more people are turning to 'guerrilla' usability testing for a quick, cheap insight into how to make their websites better. Here's how to do it.
Planning your tests
First you need to consider at what stage of development you want your site tested.
Running a usability test on an existing site can give you an excellent overview of how well it works and how it can be improved. This is what's known as a summative test.
However, usability testing is for life, not just for Christmas, so it's often worth testing sites as you're making them, too – studies show it's 100 times cheaper to fix problems during design than after launch.
This is called formative testing because it helps you refine your ideas as you go. It's an increasingly common approach and fits in particularly well with the Agile philosophy.
If you're testing an unfinished site you need to choose what bits to test – usually stuff you've just developed, or perhaps a prototype.
Lo-fi paper prototypes are great ways to test early drafts of your site. Either take wireframes, if you have any, or sketch and cut out the relevant sections. You can then rearrange them on a large A3 sheet and ask your participant to interact with it as if it were a real site: using a finger to represent clicks, speaking keyboard input out loud and so on. Although this approach requires a certain suspension of disbelief, participants are usually happy to adapt to this unusual form of test.
Paper prototypes are best suited to sites early in development. As you get closer to a solution, you'll want to test either what you've already coded or more substantial prototypes. For higher-fidelity prototypes, you can use specialist prototyping software such as Axure and iRise, or get stuck in with HTML.
At Clearleft we prototype in HTML, CSS and some jQuery, creating 'mid-fi' prototypes – good enough to be usable but rough enough to be quick and easy to create.
Quantitative or qualitative?
Quantitative tests measure numerical things such as task completion percentages. For these tests you need quite a few users, but the resultant statistics can help put a financial value on usability improvements.
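A completion percentage like this is just simple arithmetic over pass/fail results. Here's an illustrative sketch (the results data is made up, not from any real test):

```python
# Illustrative only: compute a task completion rate from invented
# results of a quantitative test (True = participant completed the task).
results = [True, True, False, True, True, False, True, True]

completion_rate = sum(results) / len(results) * 100
print(f"{completion_rate:.0f}% of participants completed the task")
```

With enough participants, rates like these can be tracked across releases to show whether usability work is paying off.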
Qualitative testing is more concerned with watching people use the system and learning how well they understand it. As such, it's suited to the guerrilla approach and formative testing, and it's what we're focusing on here.
Of course, these two extremes aren't mutually exclusive, and a well-designed test can have both quantitative and qualitative elements.
Once you've chosen what to test, you should write some scenarios for the test. Examples for, say, a car classified site might be:
"You're looking to buy a used estate car for the family and have £2,000 to spend."
"You want to read some user reviews of the Peugeot 307."
"You want to know how much your J-reg Mercedes 190E is worth."
Make sure the scenarios reflect the user's overarching goals, not how they should do it. Although you know the 'right' way to answer these scenarios on the site, you want to see whether it's obvious to users. Your scenarios should also involve the most important functionality you have: there's no point testing the subtleties of a photo cropping interface if the user can't log in.
The complexity of your scenarios will dictate how many you can fit in a single test, but five or so is common. To check you're covering the right number, estimate how long your scenarios will take to walk through. As a rule of thumb, double the time it takes you to walk through them all, then double it again to cover admin and briefing time. If this comes to between 30 minutes and an hour, you've got it about right.
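As a back-of-the-envelope illustration (the helper name and figures are ours, not from the article), the rule of thumb above works out like this:

```python
# Rule of thumb from the text: double your own walkthrough time
# (participants are slower than you), then double again to cover
# admin and briefing. Aim for a total of 30-60 minutes.
def estimate_session_minutes(walkthrough_minutes):
    participant_time = walkthrough_minutes * 2  # users are slower than you
    return participant_time * 2                 # admin, briefing, warm-up

# A 10-minute walkthrough suggests a 40-minute session -
# comfortably inside the 30-60 minute sweet spot.
total = estimate_session_minutes(10)
print(total)
```

If the estimate comes out well over an hour, cut a scenario; well under 30 minutes, add one.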
How many users should you test your site on? Even one user is better than none, but it's worth getting a few people in to eliminate freak results and catch all of the common problems. For a single round of testing, five users will find the majority of errors.
For a quantitative test, you may need 20. Pace yourself – usability testing is surprisingly hard work and you'll struggle to cope with more than four or five tests in a day. For guerrilla tests, try three in an afternoon or even just one over a lunch break.
It's important that you find test subjects similar to your intended user base. If you've created personas for your site, these should be your guide. If not, give some thought to your target audience, but don't just focus on demographics; it's actually more important to find people who have similar needs. For example, a taxi driver and a high-flying stockbroker might both need to check their bank balances on the move.
Finding the right people
You can find participants through friends, family, Twitter, Facebook and so on. Get the word out and include some basic screener questions to filter out those who don't meet your criteria.
Recruitment can take a while: you'll need to find people, assess their suitability, schedule mutually acceptable times and agree payment, so allow a couple of hours per user. Alternatively, you can use a specialist recruiter – prices vary but you can expect to pay £30-£50 per participant.
If you do lots of testing, it might be useful to set up your own pool of users you can pick from. However, try to cast your net widely. It's not ideal to have one person testing the same site twice; they may remember things from the first test and skew your results.
Few people will give up their time for free, so you'll also need to consider incentives. Anything below £20 per hour is a bit miserly. You'll need a lot more to attract people from a particularly rare niche (GPs, mustachioed Canadian cat owners); it's probably worth using a specialist recruiter for these guys. It's up to you whether you offer cash, vouchers or any other incentive. If you're in a larger organisation, ask your finance team what's easiest.
Once you get started, you'll be too busy watching to take notes by hand, so it's useful to record the screen. A video camera might suffice, but the quality won't be great and you won't be able to see the user's face. A better approach is to use screen recording software: see 'Usability testing software' (right). If you can't work out a way to record the session, try to find someone to take notes for you.
Also think about what kind of venue you want the testing to take place in. The most obvious choice is a spare room in your office, but they can be unfamiliar and intimidating places for the public. The guerrilla alternative is to get your laptop out and scoot round to a more amenable venue.
The ideal environment is as close to the user's natural habitat as possible, so you get the chance to see other things about their set-up. Do they have passwords scribbled on Post-it notes? Do they have to ask their son to help install software? These can have important implications for your designs.
Of course, it can be very hard to arrange tests on the user's home turf, so a quiet coffee shop can be a good neutral venue – but scout around first. It'll be hard to focus if baristas are shouting espresso orders within earshot.
The day of the test
Set up early and check you've brought the right cables (including a charger) and have all your paperwork ready. Once the user arrives, greet them, offer them a drink and generally make them feel at home. People might be nervous, so it's part of your job to make sure they relax.
A good way to start is to explain that you're not testing them, just the website, so they can't do anything wrong. It's often best to pay their incentive in advance too, so they don't feel like they have to do well to earn their cash.
While you're doing this, you'll probably want to go through some housekeeping. If you have a non-disclosure agreement, ask them to sign it. You should also ask permission if you're recording, and explain that they can opt out of the test at any point, and omit anything with which they feel uncomfortable.
Finally, explain what's known as the 'think-aloud' protocol. This simply means that you ask your participant to talk you through what they're thinking during the test. This is a really useful way of learning how they believe the site works and behaves – what's known as their 'mental model' of the site. Then turn on your recording software, open the site in a browser (of the user's choice, ideally), give them their first scenario and go!
As people familiarise themselves with the site, they'll often go quiet. That's fine, but you might wish to prompt them occasionally to explain what they're thinking. Good questions are "So what's going through your head right now?", "What do you think this page does?" or "What are your reactions?".
Try to avoid subjective or leading questions such as "Do you like this?" or "Does this button need to be bigger?". It's almost always better to ask open, probing questions starting with 'why', 'what' or 'how'. These are harder to answer in subjective terms and give users the chance to express how they've understood the system.
Dealing with questions
Sometimes, your participant will ask you a question. It's natural to want to help, but this can confuse your findings, so you need to encourage them to find their own solutions.
One approach is to ask the question back. If they say, "What does this button do?", reply with "What do you think it does?". If they're persistently asking questions, politely explain that you're interested in seeing how they solve these problems themselves, but you'll be happy to answer any outstanding questions at the end of the session.
Similarly, there will undoubtedly be moments where your user gets completely stuck. Sometimes you can learn a lot by how people try to rescue seemingly lost causes, so don't intervene straight away, but don't leave them struggling for too long either. Ask their opinions if they've sunk deep into thought, and if they're still nowhere near the right path, make a note and then step in and help them.
Running your first test can be quite an experience. Your instinct will be to yell out in frustration when the user overlooks your beautifully crafted navigation and goes in completely the wrong direction. Resist this urge.
Just as fitness instructors will tell you that pain is weakness leaving the body, so usability experts will remind you that you're finding out how to make your website better. It can be a painful process, but testing will make a big difference to the people who use the site.
Some participants will be more 'useful' than others, uncovering dozens of issues, while some will breeze effortlessly through the test. Some you'll barely be able to get a word out of; others you won't be able to shut up. This is one reason why you generally want to test with a few people, but even the quietest participants will teach you something – probably hidden somewhere in the recordings.
After the test
You've got your users to come in, the tests went well – now what? Analysis is the most important phase of testing. Having the most well-executed tests with the most interesting and insightful users won't matter if you can't turn them into sensible findings and recommendations.
First, write down everything you remember from the tests while it's still fresh in your mind. If you have a number of tests booked in a row, gaps in between sessions are perfect.
At this stage, just try to capture the most obvious issues; the ones that made you think "Why didn't I see that sooner?". With any luck, there'll be a few of these. To do the detailed analysis you'll need to set aside some quiet time later, get those headphones on and run through your videos.
The main things to look for are moments when the user struggled to use the site productively. These could range from simply overlooking a checkbox to times of complete and utter confusion. Whenever you find one of these moments, pause, scribble it down and then resume playback.
As you review how your participant used the site, also pay attention to what they said. Listening is one of the best ways to uncover mental models, which may be surprisingly different from how the site actually works. This is usually a sign that your design isn't quite right, although for some larger sites a user may not need to understand every intricacy. Just ascertain whether they understand enough to use the site well, and note down any problems with the rest.
One word of caution. Although it's important to listen, there are times when you should also take users' comments with a pinch of salt. What they do is normally more important. Ignore any comments such as, "I understand this, but my mum wouldn't." You're not testing their mum.
Similarly, you can guarantee that no matter how wonderful your site, someone will comment that they don't like the colours or they wish a particular corner was rounded. Don't worry too much about this subjective stuff unless it becomes a clear, repeated pattern.
If you have time, it can be really valuable to watch the session a second time, looking for anything you missed on the first run-through.
This is also a good opportunity to refocus your attention and take a look at your own technique. Did you ask any leading questions? Should you have allocated longer for a particular task? Chalk it up to experience and make a mental note for next time.
Once you have a list of all the issues, it's time to perform some triage on them. You'll probably have a couple of big issues and dozens of little ones. One good way to sift through these is to score the issues by how important they are and how easy they are to fix.
Your tests should give you enough information to answer the first question; just look at how much they affected the overall usability of the site. You'll probably need to liaise with your developers (or put on your developer pants, if it's just you) to answer the second question.
Some issues will simply be too big to fix, while some are worth throwing away weeks of work to make sure you get them right. Choosing which are important and which are acceptable casualties of time and budget is a fine art. Again, this is something you'll get better at with experience.
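This severity-versus-effort triage can be sketched in a few lines. A hypothetical example (the issue names and 1-5 scores are invented for illustration):

```python
# Hypothetical triage sketch: score each issue 1-5 for severity
# (how badly it hurt users) and effort (how hard it is to fix),
# then rank so severe, cheap-to-fix issues float to the top.
issues = [
    {"name": "login link overlooked",   "severity": 5, "effort": 1},
    {"name": "photo cropper confusing", "severity": 2, "effort": 4},
    {"name": "search defaults wrong",   "severity": 4, "effort": 2},
]

# Sort by severity descending, then by effort ascending.
ranked = sorted(issues, key=lambda i: (-i["severity"], i["effort"]))

for issue in ranked:
    print(issue["name"])
```

The top of the list is your quick wins; the bottom holds the acceptable casualties of time and budget.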
You'll probably be asked to produce a report for your boss or client. Although there are a number of different ways to do this, it's usually worth presenting your findings in person. Avoid bullet-itis, and consider playing some video highlights too: they can be extremely persuasive.
If you need to leave a summary of your findings behind, create a separate one-page document. Hefty usability reports have a tendency to sit, unread, on managers' desks.
The best usability testing is repeated and iterative. Although one round is better than none, two rounds is better still: the more people you can test the site with, the better it'll be. For instance, you might want to do some formative testing early on, make changes and refine your designs, then follow up nearer the end with a round of summative testing.
Practically, budget and time constraints will limit the amount of testing you can do, but by using the guerrilla approach you maximise your chances of squeezing bits in where needed.
In the end, the user-centred approach is as much about mindset as process: putting users right at the heart of everything you do is a big change for many businesses.
Reshaping the culture of a business (even your own) takes a long time, but guerrilla usability testing is a great first step to introducing user-centred thinking. Sure, it takes a bit more effort than just making websites you think will work, but the results speak for themselves.
Your clients will be happy, your users will be happy and ultimately you'll be helping to make the web a better place for everyone.
First published in .Net magazine, Issue 182