This is the third post in a four-part series on user testing for DH projects. On Wednesday I discussed some ways of doing “Quick and Dirty DH User Testing”; on Thursday, I discussed my work with user testing for “amateur” user audiences.

Today, I'd like to share some research on DH user testing that I've found useful in my own work, sorted by focus and field, with some comments on the state of digital evaluation. I wrote my master’s thesis shortly after discovering that what I was doing was digital humanities, so the list of studies below surely misses some good DH evaluation work that I simply wasn't aware of or didn't locate. (The list also doesn't cover more recent research, something I'll remedy as I blog more about the new user study work I'm doing as part of my Ph.D. dissertation.)

1. Studying usage patterns of digital humanities resources: The LAIRAH (Log Analysis of Internet Resources in the Arts and Humanities) Project worked directly with users to determine "factors that may predispose a digital resource to become used or neglected in the long-term" (Warwick, Terras, Huntington, & Pappa, 2008). (Warwick, C., Terras, M., Huntington, P., & Pappa, N. (2008). If You Build It Will They Come? The LAIRAH Study: Quantifying the Use of Online Resources in the Arts and Humanities through Statistical Analysis of User Log Data. Literary and Linguistic Computing 23(1), pp. 85-102.)

2. Usability work geared at improving individual projects or features: for example, Don et al.'s work with the textual analysis tool FeatureLens. (Don, A., Zheleva, E., Gregory, M., Tarkan, S., Auvil, L., Clement, T., Shneiderman, B., & Plaisant, C. (2007). Discovering interesting usage patterns in text collections: integrating text mining with visualization. In Proceedings of the Sixteenth ACM Conference on Information and Knowledge Management (Lisbon, Portugal, November 6-10, 2007). CIKM '07. ACM, New York, NY.)

3. User studies of digital resources conducted for/by groups outside of the digital humanities:

Teachers and students within the formal education system:

  • Moursund, D., & Bielefeldt, T. (1999). Will New Teachers Be Prepared To Teach in a Digital Age? A National Survey on Information Technology in Teacher Education. Milken Exchange on Education Technology: Santa Monica, CA.
  • Crowther, M. S., Keller, C. C., & Waddoups, G. L. (2004). Improving the quality and effectiveness of computer-mediated instruction through usability evaluations. British Journal of Educational Technology 35(3), pp. 289-304.

Digital libraries and their patrons:

  • Hill, L. L., et al. (1997). User evaluation: Summary of the methodologies and results for the Alexandria Digital Library, University of California at Santa Barbara. Paper presented at the American Society for Information Science and Technology Annual Meeting, Washington, D.C., November.
  • Dervin, B., Connaway, L. S., & Prabha, C. (2004). Sense-Making the Information Confluence: The Whys and Hows of College and University User Satisfying of Information Needs. The Online Computer Library Center (OCLC), Ohio State University, Dublin, Ohio.

Scholarly workers using academic libraries as "information environments":

  • Brockman, W. S., Neumann, L., Palmer, C. L., & Tidline, T. J. (2001). Scholarly Work in the Humanities and the Evolving Information Environment. Digital Library Federation and Council on Library and Information Resources (CLIR), Washington, D.C.
  • Friedlander, A. (2002). Dimensions and Use of the Scholarly Information Environment. Digital Library Federation and Council on Library and Information Resources (CLIR), Washington, D.C.
  • Troll Covey, D. (2002). Usage and Usability Assessment: Library Practices and Concerns. Digital Library Federation and Council on Library and Information Resources (CLIR), Washington, D.C.

Scholars as users of digital texts:

  • Sukovic, S. (2008). Convergent Flows: Humanities Scholars and Their Interactions with Electronic Texts. The Library Quarterly 78(3), pp. 263-84.

Who Wants to Know? Even in studies that could benefit digital text development, multiple agendas shape how digital resource user studies are interpreted, from institutional administrators to resource developers. These competing interests may baffle digital text creators who try to sound out their entire potential audience to learn how “an exceptionally diverse set of digital resources is actually used" (Harley et al., 2006, section 1-2).

Reports. Several reports have found that most digital cultural collections model their intended audiences only informally, and have called for disciplined evaluation of digital text users. These reports include:

  • The Alice Grant Consulting report in 2003: Alice Grant Consulting (2003). Evaluation of Digital Cultural Content: Analysis of Evaluation Material. Retrieved August 29, 2009, from the Digital Cultural Content Forum.
  • The National Initiative for a Networked Cultural Heritage (NINCH) “Guide to Good Practice” in 2003: The NINCH Guide to Good Practice in the Digital Representation and Management of Cultural Heritage Materials: XII. Assessment of Projects by User Evaluation (2003). NINCH (National Initiative for a Networked Cultural Heritage; authoring body). Retrieved August 21, 2009.
  • The Center for Studies in Higher Education’s “Online Educational Resources: Why Study Users?” meeting in 2005 (see Harley et al., 2006).

Juola (2006) decried this lack of user studies, identifying a "mismatch of expectations between the expected needs of audience (market) for the tools and the community’s actual needs" as a likely source of much unrealized potential with digital texts (p. 5). (Juola, P. (2006). Killer Applications in Digital Humanities. Literary and Linguistic Computing. Retrieved August 21, 2009.) NINCH similarly found that digital texts are often erroneously designed around assumptions about users' needs based on “existing usage of analog resources” (2003). Such assumptions ignore the new possibilities presented by digital resources: "for instance, use of postcard collections has always been limited, but when made available in digital form their use rises dramatically; ease of access is the key” (NINCH). The discovery of such new uses for digitized materials underlines the need for direct user evaluation: “only by carrying out evaluation with our users can we find out how the digital resources we create are actually being used” (NINCH).

The increasing ubiquity of the digital text is paralleled by the increasing importance of empirical evidence for the worth of these projects and for the needs of their users. Digital text developers need to formally gather feedback from amateur users, rather than developing user personae through thought experiments or relying on informal models of their needs; if it seems obvious to us that what we create is useful, we should be able to develop metrics that demonstrate this success less subjectively.

I'll finish up this post series tomorrow with some more thoughts on "Testing DH Sites: More on Use, Usability, and Usefulness".