Tuesday 20 September 2011

Evaluating Pelagios' usability

Hello! I'm Mia, and I was drafted into the Pelagios project to run some usability testing on the 'graph explorer'. (These days I'm working on a PhD in Digital Humanities in the Department of History at the Open University, but until quite recently I worked as an analyst/programmer and user experience designer, mostly in museums, and in early 2011 I completed City University London's MSc in Human-Computer Interaction.)

There's a range of methods we could have used to evaluate Pelagios' usability, but the 'gold standard' is user testing (basically, showing the site to typical users and gathering their feedback as they complete set tasks). User testing takes more resources to set up and run than other techniques, but it's particularly useful for 'novel' interfaces like the Pelagios visualisation. Other common methods are testing with paper prototypes (e.g. if you haven't got a working site yet), card sorting, or having experts review the site against usability checklists (AKA 'heuristic evaluation').

Quite a bit of work goes into preparing and piloting user testing. Once I'd written a usability testing plan, I worked with Elton, Leif and Rainer to define the key audiences ('subject specialists' and 'non-specialists with an interest in the classics/ancient world' - specialist is a relative term in this context) and create a persona to represent the specific target audience we were going to test for, the snappily-titled 'non-specialist adults with an interest in the classics/ancient world'.  In addition to focusing our minds on the usability requirements of this audience and typical tasks they might undertake on the site, this persona would be used in future design processes to ensure the project delivers user-centred designs.

We also reviewed the available functionality and interfaces to design test tasks that would help us understand where improvements were needed to make the graph visualisation more useful to its audiences. The trick is to write tasks that make sense to the test participants while also leading them to use key areas of the site. I also included an initial open-ended question to elicit first impressions of the site, as it's a good way to gather feedback on the overall design and get a sense of what the participant thinks the scope of the site might be.

I also designed a short semi-structured interview - a set list of questions to ensure consistent data collection, with the flexibility to explore interesting issues as they arise and gain insight into user requirements, expectations and mental models. I included questions about other sites the participants use regularly for research or leisure, as these give some idea of the expectations they'll bring to Pelagios. It's helpful to order your questions so the easy ones come first, as this gives the participant a chance to relax and get used to the situation. User tests use the 'think aloud' protocol, where the participant, well, thinks aloud, sharing the thoughts and questions running through their mind as they use the site and work through their tasks.

Meanwhile, Elton was recruiting participants - we were aiming to include people interested (but not yet specialist) in the Classics or ancient world, to match our target audience as closely as possible. Once I had the test tasks and interview in order, I wrote a short introductory script to read at the start of each testing session.  This script helps the tester remember to give everyone the same information so the tests are consistent and the participant has a positive experience.  I then ran a pilot test with a volunteer participant, including the interview and intro script - this is one of the most important stages, because it helps you refine your language so it's clear to people new to the project, check the timing of tasks and make sure everything works as expected.

In my next post I'll explain what happens in a usability test session, and share our design persona with you. In the post after that, I'll share some of the results... Post below if you have any questions or comments!
