This is the first post in a four-part series on user testing for DH projects.

Let's start extremely practical: what can we do in a single day toward user testing our own small-scale DH tools and projects?

[Photo: an affinity diagram made of post-it notes on a wall.] To know the affinity diagram is to fear it! Although it's a useful assessment tool, I'll be recommending some kinder ways of evaluating how your DH project treats its audience. (Photo by author's contextual inquiry team, Fall 2009)

1. Analytics! Analytics! Analytics! (Know Thy Audience)

Even if you have no funding and only an hour of free time, I can't see any reason not to install Google Analytics on your DH project website (or your blog or portfolio, for that matter). Google Analytics tracks visitors to your site and gives you information like your site's busiest days of the week, how long visitors spend on any given page, whether your readers are using mobile devices to access your site, and where in the world your visitors are from. All this is not only fascinating and flattering (neat! people are looking at your work!), but it also helps you build a better website and project by giving you free information on who's interested in which pages of your site and what technological needs they have for accessing your content.

Drop me a comment if you'd enjoy a post on setting up and generating custom reports with Google Analytics to find out who's using your DH website and how. Basic setup is pretty easy (generating customized reports is a bit trickier): on any site, you can insert a few lines of code the Google Analytics site gives you to start tracking; if you're running WordPress (WP), it's even easier: behold, the Google Analytics plugin! (On WP, you've probably also got the nifty Jetpack statistics running, which provide less powerful but still useful analyses of your site's users than Google Analytics does.)
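If you do pull a custom report, you can also export it and poke at it yourself. Here's a minimal sketch of that kind of analysis in Python, assuming you've exported a report as CSV; the file name and column names below are placeholders, so check them against the header row of your actual export:

```python
import csv
from collections import Counter

# Placeholder column names -- check the header row of your own export.
DEVICE_COL = "Device Category"   # e.g. desktop / mobile / tablet
COUNTRY_COL = "Country"
SESSIONS_COL = "Sessions"

def summarize(report_path):
    """Tally sessions by device type and visitor country from an exported CSV."""
    devices, countries = Counter(), Counter()
    with open(report_path, newline="", encoding="utf-8") as f:
        for row in csv.DictReader(f):
            sessions = int(row.get(SESSIONS_COL, "0").replace(",", "") or 0)
            devices[row.get(DEVICE_COL, "unknown")] += sessions
            countries[row.get(COUNTRY_COL, "unknown")] += sessions
    return devices, countries

if __name__ == "__main__":
    devices, countries = summarize("analytics_export.csv")
    print("Sessions by device:", devices.most_common())
    print("Top visitor countries:", countries.most_common(5))
```

Even a rough tally like this can tell you, say, whether enough of your visitors are on phones to justify a mobile-friendly redesign.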

2. What Do You Want to Study About Your Users?

If your time and money are limited, it might help to separate out what you most want to learn about your audience:

  • Usability: Can your site be used as you intend? Is it difficult to navigate or to understand how the content is presented? To assess usability, you might give a set of readers the same task to achieve through your site (e.g. “Tell me about how and why James Joyce’s drafts look different at different stages of writing”) and observe the time and steps each reader takes to provide you with an answer (see the timing sketch after this list).
  • Good Usability or Bad Usability?: In a study headed by Jan Spyridakis*, researchers found that better retention of visually accessed knowledge often meant... bad usability! Design that interrupts narrative flow, exercises working memory, and forces readers to establish context for themselves all improved retention of information. When trying to apply understandings of the reading brain to DH project design, we really need to understand more about humanities-specific reading (i.e. long texts and research reading patterns). As we design for research and learning, we need to ask ourselves whether we are aiming at use or usability--do we want the user to go our way or their way? Are there ways to make these ends more compatible?
  • Use: How are people using your site--as intended, or for other ends? Using a tool like Google Analytics is a start toward knowing who’s using your stuff, but to go beyond demographics and what pages are getting visited, you’ll need to think about a tactic like soliciting use stories from site visitors.
  • Usefulness: If your site can be used as intended (usability), is it actually helping the people you're looking to serve? A simple, quick embedded survey (e.g. from SurveyMonkey, which I think gives a free upgraded version to graduate students--at least it did in 2009) can let your visitors tell you whether they found what they were looking for, ran into any navigation or comprehension issues, or would like to see something new on your site.
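To make the usability-task idea from the first bullet concrete, here's a minimal timing sketch in Python. It's just a stopwatch you run while sitting with a participant; the task prompt and the three-participant loop are only illustrations:

```python
import statistics
import time

def timed_task(prompt):
    """Time one participant on one task: press Enter when they start,
    and Enter again when they report an answer."""
    input(f"TASK: {prompt}\nPress Enter when the participant begins... ")
    start = time.monotonic()
    input("Press Enter when they finish... ")
    return time.monotonic() - start

if __name__ == "__main__":
    task = ("Tell me about how and why James Joyce's drafts "
            "look different at different stages of writing")
    durations = [timed_task(task) for _ in range(3)]  # e.g. three participants
    print(f"Median time: {statistics.median(durations):.1f}s; "
          f"range: {min(durations):.1f}-{max(durations):.1f}s")
```

The numbers are only half the story: notes on where each participant hesitated or got lost matter at least as much as the raw times, so keep a pad handy.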

3. Build On Existing Metrics (Standards of Measurement)

Depending on what you’re studying, you don’t necessarily need to build a metric from scratch--in fact, in some cases it’s best to adapt previous work. Here’s a section from my master’s thesis on basing my metrics on other researchers’ work, with the plainspeak take-aways below:

The instrument (the metric I used to conduct research) is a combination of pieces of two pre-existing instruments derived for research on similar topics. A study by Koohang (2004)* assessed the usability of e-learning courseware with a focus on users' perceptions of the courseware's usability, much as the current study assesses learners' perceptions of digital text features and use; both Koohang's work and this study also looked at the effect of technology experience on user experience and educational success. A study by Harley et al. (2006)* assessed digital resource use in the undergraduate humanities and social sciences; as with the current study, Harley's study emphasized the need for user studies to empirically understand the desires of users of digital learning tools.

The Likert item questions (spectrum of answers indicated by radio buttons) drew from both of the instruments used in these previous studies, as they provided a tested example of the assessment of a digital resource's user experience. The analysis of quantitative data also drew from Koohang's (2004) study; where Koohang compared years of Internet experience with assessments of e-learning courseware usability using ANOVAs, this study compared years of Internet experience and degree of experience with digital texts with the results of ten scaled-response questions assessing digital text use.
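If you wanted to run the same kind of group comparison on your own survey responses, a minimal sketch using scipy's one-way ANOVA might look like the following. The file name and column names are placeholders I've invented, and strictly speaking Likert items are ordinal data, so treat this as the rough first pass it is:

```python
import csv
from collections import defaultdict
from scipy.stats import f_oneway  # one-way ANOVA

# Placeholder survey export: one row per respondent -- adjust to your own columns.
EXPERIENCE_COL = "years_of_internet_experience"   # e.g. "<5", "5-10", ">10"
LIKERT_COL = "digital_text_usability_score"       # e.g. 1 (low) to 5 (high)

def scores_by_experience(path):
    """Group Likert scores by experience bracket."""
    groups = defaultdict(list)
    with open(path, newline="", encoding="utf-8") as f:
        for row in csv.DictReader(f):
            groups[row[EXPERIENCE_COL]].append(float(row[LIKERT_COL]))
    return groups

if __name__ == "__main__":
    groups = scores_by_experience("survey_responses.csv")
    f_stat, p_value = f_oneway(*groups.values())
    print("Group sizes:", {k: len(v) for k, v in groups.items()})
    print(f"F = {f_stat:.2f}, p = {p_value:.3f}")
```

A low p-value here only tells you the experience groups differ somewhere; you'd still want to look at the group means (and sizes) before drawing conclusions about your audience.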

I wrote my master's thesis shortly after finding out that what I was doing was digital humanities, so I'm certain its discussion of the lack of DH user studies misses out on at least some good DH evaluation work done before 2009, though not for lack of searching--such work is still difficult to locate (thus, the DH Now solicitation for writing on these topics). What isn't as difficult is finding evaluation work from other domains such as education and computer science that can be useful in assessing at least your website-based work (e.g. the Koohang study mentioned above, which studies perceptions of usability rather than usability itself). Note the parallels I drew between their studies and mine; and don't forget, in any public write-up you create from your testing, to clearly justify why the metrics you created/adapted were the right ones (social science papers provide some good tips on how such a write-up differs from a traditional humanities paper).

If you don't have a lot of time or experience with user testing, finding an existing metric that applies to your questions about audience means you can spend less time worrying whether your results will say what you think they say, and more time analyzing those results when you get them. Not only is it good not to reinvent the wheel; adapting another researcher's metric (remember to cite!) also lets you read about how that metric was applied and what the results and issues with the tool were.

I hope these quick ideas were helpful! Tomorrow, I'll be writing about a specific user audience that interests me: "amateur" users, aka citizen scholars or interested audience members from outside the academy/alt-ac world.

*: Spyridakis, J., et al. "Using structural cues to guide readers on the internet." Information Design Journal 15(3), 242–259.
*: Koohang, A. (2004). "Expanding the Concept of Usability." Informing Science Journal, 7, 129–141.
*: Harley, D., et al. (2006). Use and Users of Digital Resources: A Focus on Undergraduate Education in the Humanities and Social Sciences. Center for Studies in Higher Education.