Most of the online advice I've encountered for evaluating non-traditional scholarly projects focuses on seeking jobs or applying for tenure with a significantly digital portfolio. The process of translating that body of work for a reviewing committee matches up pretty well with how evaluation works at the dissertation defense for a non-traditional and/or digital project (I've covered some of these suggestions at the end of this post). Charting progress and recording effort throughout a non-traditional dissertation, though, is a mostly undiscussed topic.
Having a unique format scrubs a lot of the traditional methods for evaluating these factors: maybe there's no "chapter" to turn in, or you've undertaken a critical building project in a programming language your committee can't read. Even if your committee could read through the entire body of code you author, you wouldn't want them to need to: just as with a monograph project, you want to respect your mentors' time by asking for feedback at specific milestones and with some kind of concise artifact to assess.
My dissertation has a unique format: there's no monograph, and the bulk of the work is blogging, design, coding, user-testing, and analysis of testing data. I've come up with a comfortable set of practices that keep my dissertation committee updated on and on board with my expected outcomes, give me some handy accountability to produce a reasonable amount of weekly work, and clearly document my effort and progress in case administrators ever request evidence that my research is equivalent to that of students using a more time-tested format. This post covers how I'm recording my progress and sharing recognizable milestones with my advisor and committee, using a combination of GitHub, Zapier (an IFTTT-like service), Basecamp, scheduled emails, and GoogleDocs.
I'm using GitHub to manage my code (more below), and it turns out GitHub has a lot of nifty ways to give my advisor and committee a sense of my coding progress and effort: visualizations that would work just as well for someone using GitHub to track changes in their writing, for example. I've added my advisor as a collaborator on my dissertation code repository so that he can also see the progress charted there.
Example code commits give a summary of what the code I'm adding to the repository achieves (e.g. did it fix a specific bug? add a new feature?).
Whether or not the text you're working with is code, GitHub lets you track frequency and quantity of work, and offers some nifty visualizations to give you an overview of your work habits. With a project where you're learning or advancing a technical skill, this can be a nice way of showing that even if something you spent two weeks on didn't pan out (e.g. I ended up not using my Islandora digital edition work), you were doing active work during that time.
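If you want a rough local check on the same frequency-and-quantity data GitHub visualizes, you can tally your own commit history. This is just an illustrative sketch, not anything GitHub itself provides: it assumes you feed it date strings produced by `git log --pretty=%ad --date=short`.

```python
from collections import Counter
from datetime import date

def commits_per_week(log_dates):
    """Tally commit counts by ISO week from a list of YYYY-MM-DD commit
    dates, e.g. the output of `git log --pretty=%ad --date=short`."""
    weeks = Counter()
    for line in log_dates:
        y, m, d = (int(part) for part in line.strip().split("-"))
        iso_year, iso_week, _ = date(y, m, d).isocalendar()
        weeks[f"{iso_year}-W{iso_week:02d}"] += 1
    return dict(weeks)

# Three commits across two ISO weeks
summary = commits_per_week(["2013-09-02", "2013-09-03", "2013-09-10"])
print(summary)
```

Even a dead-end fortnight (like my abandoned Islandora work) shows up here as a cluster of commits, which is exactly the point: effort is recorded whether or not the code ships.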
GitHub visualization options include (note all examples are from a repo opened at the end of August):
I'm managing my dissertation code in a completely private repository that regularly gets pulled into a second private repository that only select collaborators can view (sharing read-only code with just a few people is slightly roundabout on GitHub).
I'm using GitHub to back up and track my coding work for all the regular webdev reasons (lots of copies keeps stuff safe, it will be easy to release the code to the public when it's ready, etc.). Part of my deliverable work is good documentation of my coding, so I need to keep the repository README updated. (A repository, aka "repo", is the place on GitHub where my code is stored, which you can visit and view as webpages; a README is a text file sharing some information about that repo, visible at the bottom of the front page of a GitHub repository.)
README file explaining what the code in the repository does/will do.
I'm using the README in two ways:
Tracking the provenance of various code collections I'm building off.
My project focuses on creating and testing new interface abilities by building off existing wheels such as Drupal (for the site structure itself) and Annotator.js (for the core textual annotation functionality). Many of the FOSS licenses on these code collections dictate that I attribute the work properly, but even without such licenses, visible and detailed attribution would be extremely important to me. Everyone should get credit and thanks for sharing their work with others! (See the awesome "Collaborators' Bill of Rights" that came out of a MITH workshop for more on why correct credit should matter to everyone.) Plus, making it clear that this project is founded on a variety of existing code sets might encourage others to think about platforms that could emerge from combining or augmenting existing code, giving us new tools and approaches instead of sinking time into reinventing wheels. Since I'll also be producing a lot of my own reusable code (plugins, interface), this page will also help me make it clear to future employers what pieces of code are authored by me.
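One way to keep that attribution consistent is to generate the README's provenance section from a single list of upstream projects. This is a hypothetical helper, not part of my actual toolchain, and the license strings are illustrative only; always check each project's actual license text.

```python
def attribution_section(upstream):
    """Render a Markdown 'Built on' section for a repo README from a
    list of (project, license, url) tuples, keeping upstream credit
    visible in one consistent place."""
    lines = ["## Built on", ""]
    for name, license_name, url in upstream:
        lines.append(f"- [{name}]({url}) ({license_name})")
    return "\n".join(lines)

print(attribution_section([
    ("Drupal", "GPL", "https://www.drupal.org"),
    ("Annotator.js", "MIT/GPL", "http://annotatorjs.org"),
]))
```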
Basecamp is a project management tool that also works well for teaching courses (and it's free for teachers!) or working on solo projects. Among other features, it lets you create various task lists containing tasks with due dates:
Some of the tasks in my Basecamp project; this list is helping me track what needs to be done before I start alpha-testing.
Basecamp will also help me automatically turn feedback from user-testing into tasks on my dissertation to-do list, using Zapier, a service that connects various internet services (e.g. getting an email when one of your tweets is RT'd, and other IFTTT-like options).
Using Zapier to create Basecamp tasks from GitHub issues.
GitHub repositories allow people to create "issues", which are like comments on the code or software in the repository; I'll then place labels on these issues depending on what type of problem they represent: a bug, a feature request, or something out of scope for the dissertation project. Once an issue gets a label, it automatically appears on a Basecamp task list as a to-do containing the text of the issue, with a due date assigning me to address it within the next two weeks. You can get started managing GitHub issues the same way by using the Zap template I've shared.
Some example issues on my GitHub repository; each issue automatically became a to-do with a due date on my Basecamp project after I labeled the issue, thanks to Zapier.
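Zapier handles this wiring for you, but the mapping itself is simple enough to sketch. The function below is purely illustrative (the label names and to-do field names are assumptions, not Zapier's or Basecamp's actual schema): an actionably labeled issue becomes a to-do due two weeks out; anything else stays off the list.

```python
from datetime import date, timedelta

ACTIONABLE_LABELS = {"bug", "feature request"}  # "out of scope" issues are skipped

def issue_to_todo(issue, today=None):
    """Turn a labeled GitHub issue (as a plain dict) into a Basecamp-style
    to-do due two weeks out, mirroring what the Zap does for me."""
    today = today or date.today()
    labels = {label.lower() for label in issue["labels"]}
    if not labels & ACTIONABLE_LABELS:
        return None  # unlabeled or out-of-scope issues stay off the task list
    return {
        "content": f"{issue['title']}: {issue['body']}",
        "due_on": (today + timedelta(days=14)).isoformat(),
    }

todo = issue_to_todo(
    {"title": "Overlapping highlights",
     "body": "Two annotations on the same span render oddly.",
     "labels": ["bug"]},
    today=date(2013, 9, 2),
)
print(todo["due_on"])  # two weeks after 2013-09-02
```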
I've scheduled myself to create a weekly email to my advisor covering three things:
These emails are useful both for keeping my advisor up to date with my work and as a record of that progress, should we ever need to demonstrate it to university administration (although I've gotten written consent for my unique dissertation format from the interested parties as described in this post, so I'm not expecting to actually need these records). Weekly emails are also effective for me as a carrot and/or stick: I know I need to make some progress during the week or the email will be empty, and writing the email reminds me of all the things I got done in a week.
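If you'd rather not start each week's email from a blank page, a tiny template helper keeps the format consistent. This is a generic sketch; the section names below are placeholders I made up, not the three things my actual emails cover.

```python
def weekly_update(sections):
    """Assemble a plain-text weekly advisor email from named sections,
    each a heading followed by bulleted items."""
    parts = []
    for heading, items in sections.items():
        parts.append(heading)
        parts.extend(f"- {item}" for item in items)
        parts.append("")  # blank line between sections
    return "\n".join(parts).rstrip()

body = weekly_update({
    "Done this week": ["Drafted annotation-plugin README"],
    "Planned next week": ["Write alpha-testing checklist"],
})
print(body)
```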
My four dissertation committee members have been awesomely willing to meet with me as a team. I've met with them all in the same room twice so far, at milestones during my dissertation process (once while designing the dissertation and preparing the prospectus, and once when I was starting to have a good idea of what the final project deliverables would look like). We'll be meeting again in the coming weeks as I finish site functionality and begin user-testing.
Meeting as a team has been a great way to make feedback into a conversation among several areas of expertise. It's also been great to reaffirm that the committee understands what the deliverables at the time of the defense will look like, and that they still find that outcome satisfies the requirements of a dissertation in my department. Once I find a meeting time that works for everyone, I send them an agenda for what I'll cover in the meeting; this helps me use their time wisely, as well as let them know the type of feedback I'm seeking or any questions I might need them to answer (e.g. I let them know I'd be checking if they still felt my deliverables worked as a full dissertation, so they can revisit my project beforehand and come to the meeting prepared with any questions or concerns). To make getting prepared for the meeting easier, I'll also share a GoogleDoc highlighting what I've accomplished since our last meeting; how my project, timeline, or expected deliverables have changed (if at all); and a list of linked titles for the blogs I've published since the last meeting.
That's how I'm tracking my work throughout the dissertation, but how do you establish when a non-traditional dissertation fulfills the normal checks on dissertation completion: has the project demonstrated the ability to design and pursue a scholarly research project at the doctoral level? If you treat the dissertation as parallel to the tenure-application process, you greatly expand the available reading on evaluating digital humanities research. There are guidelines like the MLA's for digital projects, the Center for Digital Research in the Humanities' resource page on "Promotion & Tenure Criteria for Assessing Digital Research in the Humanities", and the Carolina Digital Humanities Initiative's resource page on "Valuing and Evaluating DH Practice". Digital humanities scholars seeking tenure are blogging publicly about their experiences more often, and tips about packaging blog posts, tweets, and other examples of scholarly public engagement for tenure review circulate frequently among digital humanists on Twitter. Just as tenure applications are starting to include the ways scholars have engaged the public, keeping track of online reactions to your project (blog post reading stats, Twitter comments, and RTs for tweets about your dissertation) can serve as a testament to the meaningfulness of your research and dissertation format, as well as offer a kind of shallow mid-stage peer review (retweets of links to posts about your project being an extremely mild and imprecise form of peer review).