We’ve just completed the next step in our journey toward the new nottinghamshire.gov.uk website – usability testing of our initial three design concepts.
The three design prototypes challenged our assumptions about user behaviour and sought to discover which elements genuinely help visitors complete their tasks on our site efficiently and satisfactorily.
Dr. Emily Webber and Dr. Vicky Shipp of The Insight Lab provide this detailed post on how we approached this phase of the build and testing.
Why do user testing?
When designing a website it is important to consider the different types of people who will be visiting it, and the type of experience they will have. A really effective way to do this is through user testing, where we can observe people carrying out set tasks on a website or prototype to understand what they do, and what they think. This will help to assess their reactions to the website and anything that they find particularly difficult to use, which will in turn help us to see how it could be improved.
User testing can be carried out on just one design, but the benefit of comparing different designs is that people will be able to notice which features they like or find easiest to use across a range of options, and we can observe which elements work well and those that don’t. This will help to identify the strengths and weaknesses of the different designs so that we can make a more informed decision about future designs.
To help with the development of the new Nottinghamshire County Council (NCC) website, The Insight Lab were asked to run user testing to evaluate three prototype interface designs.
These three in-browser prototypes were created especially for the testing, and included a range of different aesthetic and functional elements to allow for a rich experience, which aimed to mirror the use of a fully functional site. Subtle differences across the designs, such as the location and function of menus, the placement of key pieces of content, and the colours used, would allow us to see how this impacted on people’s performance and experiences and understand which elements allowed for more efficient and effective task completion.
Who did the testing and what did it involve?
Using the previously identified personas as a starting point, we recruited participants across a range of age groups, locations, and with different levels of computing experience. This ensured that the sample of people who took part in the study, and their feedback and performance, were representative of the actual users of the NCC website.
The individual testing sessions lasted just under an hour, and participants were asked to use all three of the interfaces to complete a series of common tasks, such as finding out about school term dates, viewing local events, and reporting a pothole. We made sure that the order in which people completed the tasks and used each interface was randomised so that the results wouldn’t be affected by people becoming familiar with them (we call these practice effects). In this way we could be sure that the results we got were as unbiased and as accurate as possible.
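As a rough sketch of how this kind of counterbalancing can be set up (the interface and task labels below are placeholders, not the exact lists used in the sessions), cycling each new participant through a different permutation of the interfaces balances presentation order across the sample, while task order is shuffled independently for each interface:

```python
import itertools
import random

# Placeholder labels for the three prototypes and the test tasks.
interfaces = ["Design A", "Design B", "Design C"]
tasks = ["school term dates", "local events", "report a pothole"]

def session_plan(participant_id: int) -> list[tuple[str, list[str]]]:
    """Assign one participant an interface order and per-interface task orders.

    Cycling through all 3! = 6 permutations of the interfaces means each
    design appears first (second, third) equally often across participants,
    which is what cancels out practice effects in the aggregate results.
    """
    orders = list(itertools.permutations(interfaces))
    interface_order = orders[participant_id % len(orders)]
    rng = random.Random(participant_id)  # reproducible plan per participant
    plan = []
    for ui in interface_order:
        task_order = tasks[:]
        rng.shuffle(task_order)
        plan.append((ui, task_order))
    return plan
```

Across any six consecutive participants, each design is seen first by exactly two of them, so no single prototype benefits from always being encountered fresh.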
To allow us to understand exactly what people did, we used software to record what was happening on the screen – where people were clicking, hovering, and the route they took to complete each task. This software also captured a video of people’s reactions to the tasks via a webcam mounted on the monitor so that we could record when people were particularly confused or confident, and map these reactions to use of particular interface elements. We also recorded a variety of metrics such as how long it took people to complete the tasks, how many times they needed to click, and which navigation elements they used.
At the end of the testing sessions we asked participants to talk about their experiences, identify their preferred elements across the three designs, and fill in a quick survey that measured how usable they found each interface. Capturing this wide variety of data ensured rich and insightful results.
Making sense of the data
The sessions generated a lot of data, and we needed to spend time analysing it to make sense of it. This is one of the most enjoyable parts of our job as we begin to explore and uncover meaningful insights.
In this scenario we looked at the average task completion times across the interfaces, the usability score of each interface, and the different ways that people completed the tasks.
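The post doesn’t name the questionnaire behind the usability score, but a common choice for a quick post-task survey of this kind is the System Usability Scale (SUS). As an illustrative sketch, assuming a SUS-style instrument, ten 1–5 responses are converted to a single 0–100 score like this:

```python
def sus_score(responses: list[int]) -> float:
    """Convert ten 1-5 Likert responses into a 0-100 SUS score.

    Odd-numbered items are positively worded (contribution = response - 1);
    even-numbered items are negatively worded (contribution = 5 - response).
    The summed contributions (0-40) are scaled by 2.5 to give 0-100.
    """
    assert len(responses) == 10 and all(1 <= r <= 5 for r in responses)
    total = sum(
        (r - 1) if i % 2 == 0 else (5 - r)  # i is 0-based, so even i = odd item
        for i, r in enumerate(responses)
    )
    return total * 2.5
```

A SUS score around 68 is commonly cited as the average benchmark, which is the usual yardstick behind phrases like “above average usability”.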
This not only allowed us to see which design performed best overall, but also which design components people used and/or struggled with, and which tasks were more challenging than they should have been.
What did we find out?
Through our analysis we found that although all three interface designs were rated as having above-average usability, opinions about the preferred interface varied. We were able to identify the elements that were frequently used and allowed for efficient task completion, or that were particularly well received; these should remain in the future design.
However, this does not mean that other interface elements should be excluded, as these were still important to some users. The key is combining them in ways that do not hinder or distract other site visitors.
Our findings have produced a set of recommendations that NCC can use to ensure that future designs meet the various needs of people using the website. As the aim of the testing was not to choose one interface, but to understand the way people used different features of the designs, future designs are likely to involve incorporating aesthetic and navigational elements from each of the three prototypes.
The Insight Lab is an expert-led consultancy, implementing user-focused research methods to drive the design of digital products and services that are simple, efficient and a pleasure to use. Find out more about them on their website.