This is the third post in a four-part series on user testing for DH projects. On Wednesday I discussed some ways of doing “Quick and Dirty DH User Testing”; on Thursday, I discussed my work with user testing for “amateur” user audiences.
Today, I'd like to share some research on DH user testing that I've found useful in my own work, sorted by focus and field and accompanied by some comments on the state of digital evaluation. I wrote my master's thesis shortly after finding out that what I was doing was digital humanities, so the list of studies below surely misses some good DH evaluation work that I just wasn't aware of or didn't locate (the list also doesn't cover more recent research, something I'll remedy as I blog more about the new user study work I'm doing as part of my Ph.D. dissertation):
1. Studying usage patterns of digital humanities resources: The LAIRAH (Log Analysis of Internet Resources in the Arts and Humanities) Project worked directly with users to determine "factors that may predispose a digital resource to become used or neglected in the long-term" (Warwick, C., Terras, M., Huntington, P., & Pappa, N. (2008). If You Build It Will They Come? The LAIRAH Study: Quantifying the Use of Online Resources in the Arts and Humanities through Statistical Analysis of User Log Data. Literary and Linguistic Computing 23(1), pp. 85-102).
2. Usability work geared at improving individual projects or features: Don et al.'s 2007 work with the textual analysis tool FeatureLens (Don, A., Zheleva, E., Gregory, M., Tarkan, S., Auvil, L., Clement, T., Shneiderman, B., & Plaisant, C. (2007). Discovering interesting usage patterns in text collections: Integrating text mining with visualization. In Proceedings of the Sixteenth ACM Conference on Information and Knowledge Management (CIKM '07), Lisbon, Portugal, November 6-10, 2007. ACM, New York, NY).
3. User studies of digital resources conducted for/by groups outside of the digital humanities:
Teachers and students within the formal education system
Digital libraries and their patrons
Scholarly workers using academic libraries as "information environments"
Scholarly users of digital texts
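The LAIRAH approach above rests on statistical analysis of server log data. The project's actual methodology was far richer than any toy example, but a minimal sketch of the basic idea (tallying which resources in a digital collection are actually requested) might look like the following; the log lines, paths, and function names here are entirely hypothetical, invented for illustration:

```python
import re
from collections import Counter

# Hypothetical log lines in Apache's Common Log Format; a real study
# would draw on months of server logs for a digital resource.
SAMPLE_LOGS = [
    '203.0.113.5 - - [10/Oct/2008:13:55:36 +0000] "GET /archive/letters/042 HTTP/1.1" 200 5120',
    '203.0.113.9 - - [10/Oct/2008:14:01:12 +0000] "GET /archive/letters/042 HTTP/1.1" 200 5120',
    '198.51.100.7 - - [11/Oct/2008:09:12:01 +0000] "GET /archive/maps/007 HTTP/1.1" 200 20480',
    '203.0.113.5 - - [12/Oct/2008:16:40:22 +0000] "GET /archive/letters/013 HTTP/1.1" 404 512',
]

REQUEST_RE = re.compile(r'"GET (?P<path>\S+) HTTP')

def count_resource_hits(log_lines):
    """Tally successful GET requests per resource path."""
    hits = Counter()
    for line in log_lines:
        match = REQUEST_RE.search(line)
        status = line.rsplit(" ", 2)[-2]  # HTTP status code field
        if match and status == "200":
            hits[match.group("path")] += 1
    return hits

if __name__ == "__main__":
    # Most-requested resources first: a crude signal of use vs. neglect.
    for path, n in count_resource_hits(SAMPLE_LOGS).most_common():
        print(f"{n:4d}  {path}")
```

Even a crude tally like this surfaces the kind of question LAIRAH pursued with users directly: why some resources attract sustained use while others are neglected.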
Who Wants to Know? Even in studies that might benefit digital text development, the multiple agendas at play in interpreting digital resource user studies, from institutional administrators to resource developers, may baffle digital text creators who try to sound out their entire potential audience as to how "an exceptionally diverse set of digital resources is actually used" (Harley et al., 2006, section 1-2).
Reports. Several reports have found that most digital cultural collections model their intended audience only informally, and have called for the disciplined evaluation of digital text users.
Juola (2006) decried this lack of user studies, identifying a "mismatch of expectations between the expected needs of audience (market) for the tools and the community's actual needs" as a likely source of much unrealized potential with digital texts (Juola, P. (2006). Killer Applications in Digital Humanities. Literary and Linguistic Computing. Retrieved August 21, 2009, from http://www.mathcs.duq.edu/~juola/papers.d/killer.pdf; p. 5). NINCH similarly found that digital texts are often erroneously designed around assumptions about users' needs based on "existing usage of analog resources" (2003). Such assumptions ignore the new possibilities presented by digital resources: "for instance, use of postcard collections has always been limited, but when made available in digital form their use rises dramatically; ease of access is the key" (NINCH). The discovery of such new uses for digitized materials underlines the need for direct user evaluation: "only by carrying out evaluation with our users can we find out how the digital resources we create are actually being used" (NINCH).
The increasing ubiquity of the digital text is paralleled by the increasing importance of empirical evidence for the worth of these projects and for the needs of their users. Digital text developers need to formally gather feedback from amateur users, rather than developing user personae through thought experiments or relying on informal models of their needs; if it seems obvious to us that what we create is useful, we should be able to develop metrics that demonstrate this success less subjectively.
I'll finish up this post series tomorrow with some more thoughts on "Testing DH Sites: More on Use, Usability, and Usefulness".