The Filter Problem at the CODEX Literary Hackathon

Jan 5, 2016 • Amanda Wyatt Visconti

I'm at the MIT Media Lab's CODEX Literary Hackathon this weekend, an event focused on prototyping tools for the future of digital reading, books, libraries, and publishing. Below is my description of the annotation "filter problem", the research challenge I'm bringing to the table at the event (although I might end up working on something else; we'll see!).

Update: I pitched "Team Annotation" and we ended up getting around ten people together to brainstorm annotation before breaking into several smaller project teams. Read our Team Annotation notes here and see what I ended up hacking on here.

Better online communities and the "filter problem"

Making "internet comments", web annotation, and other types of online communal knowledge-making not terrible means balancing our inability to read everything (all the comments on a topic) against a desire to defeat the echo-chamber effect of Facebook-style algorithmic personalization based on what we like/favorite the most. That kind of personalization isn't a good fit for reading and learning: we want to hear differing interpretations and ideas that challenge and unsettle us, rather than get a smooth Facebook experience. I'm interested in code and design work addressing the need for nuanced information filtering, balancing digital fixes like mechanics for crowdsourcing curation & moderation with community design solutions like interfaces that scaffold good digital citizenship.

I ran my first test of these ideas about balancing algorithmic and community curation of public discussion by building InfiniteUlysses.com: an experiment in crowd commentary around a notoriously difficult novel, rejuvenating a platform traditionally aimed only at scholars (the digital edition) via social mechanics used on sites like Reddit and StackExchange. My design both supported social textual annotation and personalized (at a very basic level) the display of the resulting huge quantity of public-authored annotations to fit each reader's needs and background.

Our filtering work could happen on the Infinite Ulysses site itself, elsewhere using Infinite Ulysses' code, or on something else entirely. Reddit + Ulysses, anyone? This weekend, might we experiment with balancing coded mechanics and social curation to give people better reading experiences: a non-overwhelming number of annotations per page, tuned to their interests, background, and needs?

Some steps we could start to brainstorm and prototype:

- Brainstorm more ways of filtering annotations than by date, author, popularity, and the popularity of the author's other annotations, then think about how we can make it easy and enjoyable for readers to add the metadata that allows this additional filtering (a rough sketch of what such filtering might look like in code appears at the end of this post).
- Someone interested in algorithmic learning might draft a questionnaire that guesses the filters a given reader would want applied to their experience.
- Someone interested in community design could work on a set of best practices for reading sites to implement before inviting social annotation (e.g. how to seed the annotations with model commentary).

I'm also especially interested in anything related to annotation, social reading, and communities designed around various types of "difficult" text (challenging to read, politically contentious, requiring deep field or historical knowledge to understand...).
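To make that first brainstorm item a bit more concrete, here is a minimal Python sketch of one way multi-signal annotation filtering could work. This is not the actual Infinite Ulysses implementation (that site runs on Drupal); the field names, weights, and scoring formula are all hypothetical placeholders for whatever metadata and tuning a real reading community would settle on.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone


@dataclass
class Annotation:
    """One reader-authored note on a passage (all fields hypothetical)."""
    author: str
    text: str
    created: datetime                       # assumed timezone-aware
    upvotes: int = 0
    author_karma: int = 0                   # popularity of the author's other annotations
    tags: set = field(default_factory=set)  # e.g. {"irish-history", "wordplay"}


@dataclass
class ReaderProfile:
    """Filter weights a reader might set directly, or via a short questionnaire."""
    interests: set = field(default_factory=set)
    prefer_recent: float = 0.2
    prefer_popular: float = 0.4
    prefer_trusted_authors: float = 0.2
    prefer_my_interests: float = 0.2


def score(a: Annotation, reader: ReaderProfile, now: datetime) -> float:
    """Blend several signals into one ranking score, weighted by the reader's profile."""
    age_days = max((now - a.created).days, 0)
    recency = 1.0 / (1.0 + age_days / 30.0)   # decays over roughly a month
    popularity = min(a.upvotes / 25.0, 1.0)   # capped so one viral note can't dominate
    trust = min(a.author_karma / 100.0, 1.0)
    interest_overlap = len(a.tags & reader.interests) / max(len(a.tags), 1)
    return (reader.prefer_recent * recency
            + reader.prefer_popular * popularity
            + reader.prefer_trusted_authors * trust
            + reader.prefer_my_interests * interest_overlap)


def filter_page(annotations, reader: ReaderProfile, max_per_page: int = 10):
    """Keep a page non-overwhelming: show only the top-scoring handful of annotations."""
    now = datetime.now(timezone.utc)
    ranked = sorted(annotations, key=lambda a: score(a, reader, now), reverse=True)
    return ranked[:max_per_page]
```

The reader's weights could be set directly in the interface or guessed from a short questionnaire (the "algorithmic learning" brainstorm item above); the harder design problem is making the metadata entry behind those signals, like tags and interests, easy and enjoyable for readers rather than a chore.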
Cite this post: Visconti, Amanda Wyatt. "The Filter Problem at the CODEX Literary Hackathon". Published January 5, 2016 on the Literature Geek research blog. https://literaturegeek.com/2016/01/04/codex.