# User-testing a digital edition: Getting the feedback you need

Sep 16, 2014 • Amanda Wyatt Visconti

> Has anyone ever managed to get the type of website feedback they requested? How do you do it? I tend to get grammar, grammar, and typos.
>
> Rachel Donahue (@sheepeeh), August 7, 2014

I've been thinking about the specific questions I want to ask during the user testing for Infinite Ulysses as part of my dissertation project. More specifically, Rachel's tweet had me thinking about how to describe to user-testing volunteers what kind of feedback I'm seeking, so I came up with some statements that model the types of thoughts users might have that I'd like to know about. For the first phase of beta testing on my project, I'll ask testers some abbreviated form of: "Did you have one of the following thoughts, or something similar? Please elaborate."

- "I wanted to see ___ but didn't / couldn't locate it / stopped looking for it even though it's probably there"
- "This is broken"
- "This is cool"
- "This could be done better (by ___?)"
- "This doesn't work how I expected (and that is good / bad / should be changed ___ way)"
- "Where is ___ again (that I used before)?"
- "This requires too many steps / is hard to remember how to use"
- "I don't see how / why I'd use this"
- "I'd use this ___ way in my reading / teaching / work"
- "I gave up (where, when)"
- "___ would make me like this site / like it better / use it regularly"
- "I'm not interested in Ulysses, but I'd like to use this interface for a different novel / non-fiction text / (other thing)"
- "Starting to read the text on the site took too long (too much to learn / too much text or instruction to wade through) / took the right amount of time (intro text and instruction was appreciated or easily skippable, or site use was intuitive enough to get started)"
- "I would recommend this site (to x person or y type of person)"
- "The problem with this site is ___"
- "Reading on this site would be easier if ___"
- "I wish you'd add ___"

## Testing stages for Infinite Ulysses

As I get all my design and code ducks in a row for this project this month, I'll be moving into a cycle of user-testing and improving the website in response to user feedback. I'll be testing in four chunks:

1. **Self-testing:** Working through the site on my own to locate any really obvious issues; for example, I'll step through the entire process of signing up and reading a chapter on the website to look for problems. I'll step through the site with different user personas in mind (imitating the different backgrounds and needs of some of my target audiences, such as first-time readers of Ulysses and teachers using the site with a class). I'll also apply various website assessment tools, such as validators for code and accessibility (see the sketch after this list).
2. **Alpha testing:** Next, I'll run some low-stakes testing by inviting my dissertation committee, close friends, and family to break the site. This should get me to a point where the next stage of testers aren't hitting problems big enough to take the site down, or having to wait while I spend days fixing a difficult issue.
3. **Beta testing:** I'll conduct beta testing this fall and spring by opening the site to exploration and use by the people who have generously volunteered via this sign-up form. Phase I will take place this fall and gather feedback from volunteers using the site individually; Phase II will take place in winter and early spring, continuing individual use of the site and adding people using the site in groups, such as teachers with their classes or book clubs reading together.
4. **Post-release testing:** I'll continue to take feedback once the site goes live for use by anyone in June 2015, although I'll need to scale down work on requested enhancements and focus on bug fixes and continued data gathering/analysis of how people use the site to read. Setting up site logging and Google Analytics on my site will help me monitor use as time allows.
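To make the "validators" idea above concrete, here is a minimal sketch of one automated self-test: sending a page to the W3C Nu HTML validator and printing any markup errors from its JSON report. The page URL is a placeholder rather than a real Infinite Ulysses address, and this is just one way such a check could be scripted, not the project's actual test setup.

```python
# Hypothetical self-test: run one page through the W3C Nu HTML validator
# and report outright errors. Requires the third-party `requests` library.
import requests

PAGE = "https://example.com/infinite-ulysses/reader"  # placeholder URL

resp = requests.get(
    "https://validator.w3.org/nu/",
    params={"doc": PAGE, "out": "json"},  # ask the checker for JSON output
    headers={"User-Agent": "infinite-ulysses-self-test"},
    timeout=30,
)
resp.raise_for_status()

# The checker returns a "messages" list; keep only items typed as errors.
errors = [m for m in resp.json().get("messages", []) if m.get("type") == "error"]
for err in errors:
    print(f"line {err.get('lastLine', '?')}: {err['message']}")
print(f"{len(errors)} validation error(s) found")
```

A similar pass could point at an accessibility checker; the value of scripting these checks is that they can run against every page before any human tester arrives.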
## User testing how?

I'll be building on my user-testing experience from my master's research and the BitCurator project, as well as trying some new tactics. The thesis for my information master's degree involved a use study exploring how members of the public (and others with an interest in a site's content, but little experience with digital humanities and edition/archive conventions) experienced scholar-focused DH sites, using the Blake Archive and Whitman Archive as examples. I was particularly interested in identifying small design and development changes that could help such sites better welcome a public humanities audience. For my master's research, I built on existing user-study metrics from a related field (learning technology) and also created and tested questions suggested by my research questions; feedback was gathered using a web survey, which produced both quantitative and qualitative data for coding and statistical analysis. For Infinite Ulysses, I'm hoping to further set up:

- web surveys for willing site visitors to fill out after using the site
- shorter web pop-up questions (shown only to users who check a box agreeing to these) that ask quick questions about current site use, perhaps incentivized with special digital site badges, or with real stickers if I can get some funding for printing
- in-person meetings with volunteers where I observe them interacting with the site, sometimes having them talk aloud, to me or with a partner, about their reactions and questions as they use the site
- various automated ways of studying site use, such as Google Analytics and Drupal site logging

For bug reports and feature requests, site visitors will be able to send me feedback (either via email or a web form) or submit an issue to the project's GitHub repository. All bug/enhancement feedback will become GitHub issues, but I don't want to make users create a GitHub account and/or figure out how to submit issues if they don't want to. I'll be able to add a label to each issue: bug, enhancement request, duplicate of another request, out of scope for finishing my dissertation but a good idea for someday, and won't fix for things I won't address and/or can't replicate. I'm using Zapier (a service along the lines of If This Then That) to automate sending any issues labeled as bugs or enhancements that I want to fix before my dissertation defense to Basecamp, in an appropriate task list and with a "due in x days" deadline tacked on.
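The Zapier-to-Basecamp piece is configured in Zapier's web interface rather than in code, but the underlying triage data is easy to reach programmatically. As a rough illustration, here is a minimal sketch of pulling a repository's open bug-labeled issues via GitHub's REST API; the repository name is a placeholder, not the project's actual repo.

```python
# Hypothetical triage helper: list open issues labeled "bug" from a GitHub
# repository via the public REST API. Requires the `requests` library.
import requests

REPO = "example-user/infinite-ulysses"  # placeholder owner/repo

resp = requests.get(
    f"https://api.github.com/repos/{REPO}/issues",
    params={"labels": "bug", "state": "open"},
    headers={"Accept": "application/vnd.github+json"},
    timeout=30,
)
resp.raise_for_status()

for issue in resp.json():
    # The issues endpoint also returns pull requests; skip those.
    if "pull_request" in issue:
        continue
    print(f"#{issue['number']}: {issue['title']} ({issue['html_url']})")
```

The same query with `labels=enhancement` would surface the feature-request queue instead.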
To read more about user testing in the digital humanities, check out my posts about usability personas, the relationships among testing for use, usability, and usefulness, existing resources about DH user testing, designing and testing for a public humanities audience, and some ways to do quick-and-dirty DH user testing.

## User testing for the long haul

I've got one major technical concern about this project (which I'll discuss in a later post) and one big research-design concern, both related to the "Infinite"-ness of this digital edition. My research-design concern is the length of this user testing: I'm pursuing this project as my doctoral dissertation, and as such I'm hoping to defend the project and receive my degree in a timely manner.

Usability testing can be done over the course of a few months of users visiting the site while I iterate on the design and code. Testing use and usefulness is a slower matter: how people want to use the site (perhaps differently from how I imagined), how people read Ulysses (a long and complex book which, if you're not attempting it in a class or book club, might take you months to read), and what happens to a text like Ulysses as it accrues readers, their annotations, and the assessments the social modules let readers place on others' annotations (the more readers and annotations, the more we can learn). These are things I can begin to gather data on, and I can begin to speculate on what trends that data suggests we'll see, but I won't be able to give them the full treatment of years of data gathering within the scope of the dissertation. To address this, I'll analyze the data I do gather over the months of user testing, and I'll also try to automate further data gathering on the site so that I can supplement that analysis every few months or years without requiring too much effort or funding to sustain the work.

Update: I successfully defended my digital humanities doctoral dissertation in Spring 2015. The Infinite Ulysses social+digital reading platform that was part of that project has been retired into an archival form: a static site with a slideshow tour of past interactive features.

Cite this post: Visconti, Amanda Wyatt. "User-testing a digital edition: Getting the feedback you need". Published September 16, 2014 on the Literature Geek research blog. https://literaturegeek.com/2014/09/16/user-testing-a-digital-edition-getting-the-feedback-you-need.