Originally posted on the Scholars’ Lab blog on February 27, 2019.

Healthy, diverse online learning communities depend on the labor of community design: unseen and often stressful work such as moderation, shaping discussions, and encouraging positive community behavior. As more opportunities emerge for learning online as part of a virtual community, how can we:

  1. recognize (credit, publicize, reward, treat as scholarly work) and
  2. reinforce (design, automate, fairly distribute the labor, deal with mental health impacts)
    the labor of community design? As people with marginalized identities are most at risk of harm when interacting with others online, support for moderation is support for community inclusion.

I write from my experiences moderating both my digital dissertation project Infinite Ulysses (a social platform for annotating/commenting on James Joyce’s novel) and the Digital Humanities Slack I started (a set of themed chat rooms with over 2,000 digital humanist members; co-moderated by Alan G. Pike, Sam Abrams, Alex Gil, Brandon Walsh, Ed Summers, Paige C. Morgan, Jeremy Boggs, Eleanor Dickson, Liz Rodrigues, and Erin Pappas). It’s been exhilarating using technology to collaborate with folks I don’t know: if you’ve taught DH, think of the students who respond to your lessons with “DH is exactly what I want to do, I just didn’t have a term for it before”, then expand that to include folks from all over the world and from more diverse walks of life. But it’s also been a significant source of anxiety, balancing my responsibility to the community with my desire to help folks learn to be more positive influences on that community. That stress was half of why I moved Infinite Ulysses from a Drupal site that allowed commenting to a static archived version without commenting (the other half was some serious thinking about how I could best contribute to the DH I want to see, and realizing that, just for me personally, I need to move away from Joyce and Modernist studies to best accomplish that).

I do want to note that the challenges of moderating online DH and other academic/professional communities are, at least for me, cushioned by privilege: I can choose my level of work (or opt out completely) without hurting my livelihood or family. Compared with the pain, lack of compensation, lack of respect, and other evils experienced by folks like those who moderate YouTube and Facebook, working with smaller communities that have some norms (though inequitable ones) and professional consequences (though often used as a weapon against, rather than a protector of, folks with marginalized identities) is a less pressing problem.

Why I’m blogging today

This post is largely drawn from an unsuccessful IMLS pre-proposal I PI’d (Spring 2018) for an NLG planning grant to explore this space, identify key challenges, and plan pilot experimental approaches to aid the foundational labor of moderators that underpins online learning communities, titled “A Moderate Proposal: Recognizing and Reinforcing Online Community Moderation to Benefit Diverse Learners” and co-designed with colleagues Jeremy Boggs, Katherine Donnally, Shane Lin, Laura Miller, and Brandon Walsh. (If you like the title pun, you may also appreciate that our proposal drafting GDoc was titled “DigitalHumanities.club”, after the domain that I own because it’s awesome, but that I was also using to explore Mastodon and Mattermost for better communities.)

I finally got around to this post due to cool stuff two DHers are up to:

  1. Scott Weingart’s Twitter thread on Casey Newton’s recent article.

I’ve only skimmed Newton’s piece, but I’ve read and admired Sarah T. Roberts’s more longstanding work in this area.

  2. My SLab colleague Ammon Shepherd has kicked off a series of blog posts around the challenges of archiving DH work, with what promises to be a special focus on the technical and especially sysadmin work underlying these challenges. You can read the first post in the series now, but Ammon’s also orchestrating a really cool system of inviting UVA Library colleagues to co-write the rest of the series with him, complete with clear documentation on how to take part and make the discussion better.

Thanks to Scott and Ammon for being awesome scholars doing work collaboratively and in public!

A moderate proposal

Community design requires moderation and scaffolding. Moderation is both the work of controlling spam and intentional harassment in a community and the work of designing and implementing a code of conduct (e.g. working behind the scenes with a community member whose words impact others negatively, but who wants to become a better community member). Scaffolding is the work of maintaining and nurturing a community: suggesting conversation topics, encouraging community member involvement, amplifying marginalized voices, and otherwise enacting strategies to grow a healthy and active community. Creating the technical platform for an online community is relatively simple, given existing open-source versions of popular social media platforms. Rather, the challenge of improving moderation is a social project: habituating community members to behavior that reduces harassment, encourages real accountability to others in your community, distributes the stress of moderating, and recognizes/credits/rewards people who moderate and otherwise advance positive community design.

I’ve focused on Twitter, Slack, and Mastodon as text-heavy platforms with public and frequent use by digital humanists, plus Civil Comments and the Mozilla Coral Project as technical solutions to online community challenges. I’m also interested in these past or current DHy communities and the scholars shaping them: DH Q&A, Humanities Commons, DHthis, DH Commons, the DLF Forum, DH Slack, HASTAC, DHNow, the AADHUM Slack, and the Documenting the Now Slack.

We’ve since updated our thinking and planning, but when I submitted our grant pre-proposal a year ago, we imagined the following activities and outcomes:

Activities

The project team would identify and invite 10 experts in online learning community-building, with diverse representation of gender, race, and other identities, from libraries, archives, museums, and academia. Our initial work draws on a symposium with these advisors to spark collaborations, identify key challenges, and plan out steps for piloting and assessing improvements to social systems of moderation:

  1. Project team reviews literature and existing communities: Distribute Twitter and DH Slack surveys to gather anecdotes about good and bad online academic communities and their moderation systems. Use the team’s existing text analysis skills to explore frequent vocabulary and sentiment in the discussions of a sample community, the Digital Humanities Slack (2k+ community members), if* the DH Slack community decides this is an acceptable analysis (read more about the community’s decision to keep the Slack ephemeral via a non-paid Slack plan in this post); a rough sketch of what this analysis might look like appears after this list. Create an annotated bibliography plus pre-symposium reading list. Assign each advisory board member to read 3 articles and report useful takeaway ideas from these at the symposium.
  2. 2-day symposium of 10 advisory board members with Scholars’ Lab staff and UVA Library colleagues: Project team presents results of the DH Slack message text analysis* as a grounding case study. Advisory board gives lightning talks summarizing their assigned readings. Goals set for the rest of the symposium. Facilitated design jam sessions, paper prototyping, breakout groups addressing specific challenges, such as
    1. amplifying credit for moderators, making their labor institutionally legible for promotion, and addressing the gendering of this work;
    2. templates for difficult moderation conversations with community members, tech features that could support social needs such as shadow-banning, and moderator self-care.
  3. After symposium: Project team prepares a report of best practices to support online learning community moderation, shares it with the advisory board for feedback, and disseminates it to the public. Team outlines next steps and shares them with the advisory board for feedback. Probable steps include identifying both technical and social solutions to pilot, formally approaching communities to collaborate on testing practices (e.g. MLA/Humanities Commons, ACH, DLF), scheduling pilots and assessment, prototyping and testing ideas, and reporting back to the public. We take the NEH “Off the Tracks” workshop, which resulted in recommendations for ethically crediting collaborators, as a model for all of this work, and commit to thoroughly and accurately crediting all participants in these projects.
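
To make the text analysis step in item 1 concrete: below is a minimal sketch (not part of the original proposal) of the kind of vocabulary-frequency and sentiment pass we had in mind, assuming a standard Slack JSON export on disk and off-the-shelf tools (Python’s collections.Counter and NLTK’s VADER analyzer). The directory name and stopword list are placeholders, and any real run would only happen with the community’s consent, as noted above.

```python
# Minimal sketch: vocabulary frequency + sentiment over an exported Slack archive.
# Assumes the standard Slack export layout (one folder per channel, one JSON file
# per day, each a list of message objects with a "text" field); paths are placeholders.
import json
import re
from collections import Counter
from pathlib import Path

# Requires: pip install nltk, then nltk.download("vader_lexicon") once.
from nltk.sentiment.vader import SentimentIntensityAnalyzer

EXPORT_DIR = Path("dh-slack-export")  # placeholder: the unzipped Slack export
STOPWORDS = {"the", "a", "an", "and", "or", "to", "of", "in", "is", "it", "for", "that"}

def iter_messages(export_dir):
    """Yield the text of each human-authored message in the export."""
    for day_file in export_dir.glob("*/*.json"):
        for msg in json.loads(day_file.read_text(encoding="utf-8")):
            # Skip joins, bot posts, and other non-message subtypes.
            if msg.get("type") == "message" and not msg.get("subtype"):
                yield msg.get("text", "")

vocab = Counter()
analyzer = SentimentIntensityAnalyzer()
compound_scores = []

for text in iter_messages(EXPORT_DIR):
    words = [w for w in re.findall(r"[a-z']+", text.lower()) if w not in STOPWORDS]
    vocab.update(words)
    if text.strip():
        compound_scores.append(analyzer.polarity_scores(text)["compound"])

print("Most frequent vocabulary:", vocab.most_common(25))
if compound_scores:
    print("Mean VADER compound sentiment:", sum(compound_scores) / len(compound_scores))
```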

Outcomes

We will identify project staffers with expertise in ethical design practices, and charge them with holding our focus on improving online communities for marginalized identities, while also documenting and publishing the conversations and findings occurring through the planning phase to ensure others can build on our work. During this grant’s planning phase we will create:

  • An annotated bibliography and reading list
  • A symposium whitepaper covering the ideas, best practice suggestions, and conversations that a diverse advisory board of experts generates during the symposium
  • A public conversation around crediting moderation labor and improving community moderation, connecting scholars interested in this work
  • A written plan outlining next steps for pilot projects designing, building, and assessing various technical and social features for their impact on online community moderation.

Experimental approaches may include:

  • Strategies to protect users from one another, unwanted surveillance, or commercial interests
  • Documented design-driven approaches to building more ethical academic social media platforms
  • A code of conduct template in support of community design labor
  • A model technical platform for online learning communities, designed from the start with an ethic of care, with particular focus on making the system useful to those identities usually marginalized by commercial social media platforms
  • A discussion of results from text analysis of DH Slack community vocabulary and sentiment

Interested? Know things?

If anything in this post interests you (future collaboration?), overlaps with your current work, or could benefit from my reading and citing you or others, please let me know at visconti@virginia.edu. Thank you!

Thanks to colleagues Jeremy Boggs, Katherine Donnally, Shane Lin, Laura Miller, and Brandon Walsh for co-designing the grant proposal, and to Ammon Shepherd for helping me think about the technical sustainability aspects of better online moderation experiences.