I've been thinking about specific questions I want to ask during the user testing for Infinite Ulysses as part of my dissertation project—more specifically, Rachel's tweet had me thinking about how to describe to user-testing volunteers what kind of feedback I'm seeking.  I came up with some statements that model the types of thoughts users might have that I'd like to know about. For the first phase of beta-testing on my project, I'll ask testers some abbreviated form of:

Did you have one of the following thoughts, or something similar? Please elaborate:

  1. "I wanted to see ___ but didn't / couldn't locate it / stopped looking for it even though it's probably there"
  2. "This is broken"
  3. "This is cool"
  4. "This could be done better (by ___?)"
  5. "This doesn't work how I expected (and that is good / bad /should be changed ___ way)"
  6. "Where is ___ again (that I used before"
  7. "This requires too many steps / is hard to remember how to use"
  8. "Don't see how / why I'd use this"
  9. "I'd use this ___ way in my reading / teaching / work"
  10. "I gave up (where, when)"
  11. "____ would make me like this site / like it better / use it regularly"
  12. "I'm not interested in Ulysses, but I'd like to use this interface for a different novel / non-fiction text / (other thing)"
  13. "Starting to read the text on the site took too long (too much to learn / too much text or instruction to wade through) / took the right amount of time (intro text and instruction was appreciated or easily skippable, or site use was intuitive enough to get started)
  14. "I would recommend this site (to x person or y type of person)"
  15. "The problem with this site is ___"
  16. "Reading on this site would be easier if ___"
  17. "I wish you'd add ___"

Testing stages for Infinite Ulysses

As I start to get all my design and code ducks in a row for this project this month, I'll be moving into a cycle of user-testing and improving the website in response to user feedback. I'll be testing in four chunks:

  1. Self-testing: Working through the site on my own to locate any really obvious issues; for example, I'll step through the entire process of signing up and reading a chapter on the website to look for problems. I'll step through the site with different user personas in mind (imitating the different backgrounds and needs of some of my target audiences, such as first-time readers of Ulysses and teachers using the site with a class). I'll also run the site through various assessment tools, such as code validators and accessibility checkers.
  2. Alpha testing: Next, I'll run some low-stakes testing by inviting my dissertation committee, close friends, and family to try to break the site. This should get the site to a point where testers in the next stage aren't hitting problems big enough to take the site down, or waiting while I spend days fixing a difficult issue.
  3. Beta testing: I'll conduct beta-testing this fall and spring by opening the site to exploration and use by the people who have generously volunteered via this sign-up form. Phase I will take place this fall and take feedback from volunteers using the site individually; Phase II will take place in winter and early spring, continuing individual use of the site, and adding in people using the site in groups, such as teachers with their classes, or book clubs reading together.
  4. Post-release testing: I'll continue to take feedback once the site goes live for use by anyone in June 2015, although I'll need to scale down work on requested enhancements and focus on bug fixes and continued data gathering/analysis on how people use the site to read. Setting up site logging and Google Analytics on my site will help me monitor use as time allows.
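
That kind of monitoring doesn't have to be elaborate. As a rough illustration rather than the site's actual code, here's the sort of small check-in script I have in mind for the Drupal site logging mentioned above; it assumes Drupal 7's Statistics module accesslog table on a MySQL backend, and the connection details and the "episode/" path pattern are placeholders:

```python
# A rough check-in on reading activity, assuming Drupal 7's Statistics module
# "accesslog" table (path, uid, timestamp columns) on a MySQL backend.
# Connection details and the "episode/" path pattern are placeholders.
import time

import pymysql

conn = pymysql.connect(host="localhost", user="drupal", password="***",
                       database="infinite_ulysses")
week_ago = int(time.time()) - 7 * 24 * 60 * 60

with conn.cursor() as cur:
    # How many distinct logged-in readers opened a reading page this week?
    cur.execute(
        """SELECT COUNT(DISTINCT uid) FROM accesslog
           WHERE timestamp > %s AND uid <> 0 AND path LIKE 'episode/%%'""",
        (week_ago,))
    readers = cur.fetchone()[0]

    # Which paths get the most traffic (a rough proxy for where readers linger)?
    cur.execute(
        """SELECT path, COUNT(*) AS hits FROM accesslog
           WHERE timestamp > %s GROUP BY path ORDER BY hits DESC LIMIT 10""",
        (week_ago,))
    top_paths = cur.fetchall()

print(f"Distinct logged-in readers this week: {readers}")
for path, hits in top_paths:
    print(f"{hits:5d}  {path}")
```

Google Analytics would answer similar questions with less setup, but a direct query like this keeps the raw numbers on hand for later analysis.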

User testing how?

I'll be building on my user-testing experience from my master's research and the BitCurator project, as well as trying some new tactics.

The thesis for my information master's degree involved a use study exploring how members of the public (and others with an interest in a site's content but little experience with digital humanities and edition/archive conventions) experienced scholar-focused DH sites, using the Blake Archive and Whitman Archive as examples. I was particularly interested in identifying small design and development changes that could make such sites more welcoming to a public humanities audience. For my master's research, I built on existing user-study metrics from a related field (learning technology), as well as creating and testing questions suggested by my research questions; feedback was gathered using a web survey, which produced both quantitative and qualitative data for coding and statistical analysis.

Building on that work, I'm hoping to set up:

  • web surveys for willing site visitors to fill out after using the site
  • shorter web pop-up questions (shown only to users who check a box agreeing to them) that ask quick questions about current site use, perhaps incentivized with special digital site badges, or with real stickers if I can get some funding for printing; a rough sketch of how I'd tally this kind of feedback follows this list
  • in-person meetings with volunteers where I observe them interacting with the site, sometimes having them think aloud to me or with a partner about their reactions and questions as they use the site
  • various automated ways of studying site use, such as Google Analytics and Drupal site logging
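
Once responses from those surveys and pop-up questions start accumulating, the raw text will need at least a light first-pass coding before it's useful. A minimal sketch of the kind of tally I'm picturing, assuming a hypothetical CSV export with "prompt" and "response" columns (the real export format will depend on the survey tool I end up using):

```python
# Tally survey and pop-up feedback by which prompt (1-17 above) it answers.
# Assumes a hypothetical CSV export with "prompt" and "response" columns;
# the real export format will depend on the survey tool I end up using.
import csv
from collections import Counter, defaultdict

counts = Counter()
samples = defaultdict(list)

with open("beta_feedback_export.csv", newline="", encoding="utf-8") as f:
    for row in csv.DictReader(f):
        prompt = row["prompt"].strip()      # e.g. "10" for "I gave up (where, when)"
        counts[prompt] += 1
        if len(samples[prompt]) < 3:        # keep a few verbatim examples per prompt
            samples[prompt].append(row["response"].strip())

for prompt, n in counts.most_common():
    print(f"Prompt {prompt}: {n} responses")
    for text in samples[prompt]:
        print(f"   - {text[:80]}")
```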

For bug reports and feature requests, site visitors will be able to send me feedback (either via email or a web form) or submit an issue to the project's GitHub repository. All bug and enhancement feedback will become GitHub issues, but I don't want to make users create a GitHub account and/or figure out how to submit issues if they don't want to. I'll be able to add a label to each issue (bug, enhancement request, duplicate of another request, out of scope for finishing my dissertation but a good idea for someday, and won't fix for things I won't address and/or can't replicate). I'm using Zapier (an If This Then That-like service) to automate sending any issues labeled as bugs or enhancements that I want to fix before my dissertation defense to Basecamp, in an appropriate task list and with a "due in x days" deadline tacked on.
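
Zapier handles that routing for me, but the logic it automates is roughly the following. This sketch uses GitHub's public issues endpoint, which does exist as written; the repository path, the triage windows, and the task-creation call at the end are placeholders rather than my actual configuration:

```python
# Roughly what the Zapier recipe does: find open GitHub issues labeled "bug" or
# "enhancement" and turn each into a task with a "due in x days" deadline.
# The GitHub issues endpoint is real; the repository path, the triage windows,
# and the task-creation URL are placeholders, not my actual setup.
import datetime

import requests

GITHUB_REPO = "example-user/infinite-ulysses"      # placeholder repository path
DUE_IN_DAYS = {"bug": 7, "enhancement": 21}        # made-up triage windows

for label, days in DUE_IN_DAYS.items():
    issues = requests.get(
        f"https://api.github.com/repos/{GITHUB_REPO}/issues",
        params={"labels": label, "state": "open"},
        headers={"Accept": "application/vnd.github+json"},
    ).json()

    for issue in issues:
        if "pull_request" in issue:        # the issues endpoint also returns PRs
            continue
        due = datetime.date.today() + datetime.timedelta(days=days)

        # Placeholder task creation; a real version would call Basecamp's
        # authenticated to-do API (or just keep letting Zapier do this part).
        requests.post(
            "https://example.com/create-task",
            json={"content": issue["title"],
                  "due_on": due.isoformat(),
                  "notes": issue["html_url"]},
        )
```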

To read more about user testing in the digital humanities, check out my other posts on the topic.

User testing for the long haul

I've got one major technical concern about this project (which I'll discuss in a later post) and one big research-design concern, both related to the "Infinite"-ness of this digital edition. My research-design concern is the length of this user testing; I'm pursuing this project as my doctoral dissertation, and as such I'm hoping to defend the project and receive my degree in a timely manner. Usability testing can be done over the course of a few months of users visiting the site while I iterate on the design and code. Testing use and usefulness, as in

  1. how people want to use the site (i.e. perhaps differently from how I imagined),
  2. how people read Ulysses (a long and complex book which, if you're not attempting it in a class or book club, might take you months to read), and
  3. what happens to a text like Ulysses as it accrues readers, their annotations, and the assessments the social modules let readers place on others' annotations (the more readers and annotations, the more we can learn)

are things I can begin to gather data on and to speculate about (what trends does that data suggest we'll see?), but I won't be able to give them the full treatment of years of data gathering within the scope of the dissertation. To address this, I'll both analyze the data I do gather over the months of user testing and try to automate further data-gathering on the site, so that I can supplement that analysis every few months or years without too much additional effort or funding.
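
Concretely, "supplementing that analysis" could be as simple as rerunning a short script against whatever usage snapshots the site's automated logging has piled up. A sketch under that assumption (the usage_snapshots.csv file and its month/readers/annotations columns are hypothetical, not something the site produces yet):

```python
# The low-effort re-analysis I'd like to rerun every few months or years.
# Assumes a hypothetical usage_snapshots.csv accumulated by the site's logging,
# with one row per month and "month", "readers", and "annotations" columns.
import csv

with open("usage_snapshots.csv", newline="") as f:
    rows = list(csv.DictReader(f))

print("month     readers  annotations  annotations/reader")
for row in rows:
    readers = int(row["readers"])
    annotations = int(row["annotations"])
    per_reader = annotations / readers if readers else 0.0
    print(f"{row['month']}  {readers:7d}  {annotations:11d}  {per_reader:18.2f}")

# A very rough trend check: is annotation activity growing faster than readership?
if len(rows) >= 2:
    first, last = rows[0], rows[-1]
    reader_growth = int(last["readers"]) / max(int(first["readers"]), 1)
    annotation_growth = int(last["annotations"]) / max(int(first["annotations"]), 1)
    print(f"Readers grew {reader_growth:.1f}x; annotations grew {annotation_growth:.1f}x.")
```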

I successfully defended my digital humanities doctoral dissertation in Spring 2015. The now-available Infinite Ulysses social+digital reading platform is part of that project; come read with us!