Comment me, bro: Building an abuse-resistant commenting system

Abusive and obnoxious online commentary has weighed down the Internet for too long. Here’s how to put an end to trolling.


Note: This post is the first of a two-part look at solving the problem of Internet trolling and abuse via UI and UX. See part two: How UX/UI designers and developers can cure the common internet troll

“stupid woman talks bullsh*t”
“i thought i was gonna see t*ts”
“Do you see the black girls armpit hair eww”

These are comments from YouTube users on a video piece from a couple of years ago by a vlogger friend of mine. At the time of this writing, the piece, an edutainment episode chock-full of scientific information, has 1.3 million views. My friend, who narrates and stars in it, is a Ph.D. with two books and a TED talk to her name. She blogs and hosts regularly for eminent, popular scientific publications and programs and can reasonably be said to be at the top of her profession; she’s also a great mother, friend and wit.

And yet, none of my readers will be the least bit surprised to discover that members of the anonymous (or exclusively screen-named) public have bescumbered the video’s comments section with material like the above. In referring to it, I can safely sidestep the question of whether or how to substantiate that such things were in fact said: it is common knowledge that, in such a context, things like them are said all the time, and worse.

Much worse.

Especially for female contributors to new media (so-called, though both the media and its problems are now embarrassingly old for this conversation still to be necessary), threats of physical and sexual violence are almost routine. In comments sections everywhere, levels of nastiness rarely seen outside Tarantino films are commonly reached on a hair trigger.

Without question, vlogging, blogging and virtual communities in general (anonymous and otherwise) have transformed modern communication in a number of ways. It is an industry-wide shame that this pattern of abusive communication -- this waste-stream of unenlightened and dehumanizing comments paralleling so much interesting, important or merely entertaining content -- has persisted as one of those ways.

To risk stating the obvious, all of this communication rides atop technology. But technology, so far, has not addressed the problem. There has been some work done on content moderation (manual and automated) for online communities, but it generally focuses on identifying and blocking bad human actors from access to a community. What I have been unable to find in my research, at any rate, is any effort, one that surely ought to be industry-wide, to address this problem via user interface and interaction design.

The idea that the quality of communication can be improved by algorithm is perhaps less obvious. In other contexts, however, structured methods for improving communication are common practice, particularly in endeavors that seek to improve sensitive communication between partners, colleagues and the like, although I’m unaware of any that employ digital technology to do so.

Marshall Rosenberg’s widely adopted Nonviolent Communication program, for example, essentially commits its practitioners to decoding their own communication into observations, feelings, needs and requests. In the process, the message’s essential meaning is preserved and highlighted, decontaminated of defensiveness, abuse and dishonesty.
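To make the decoding concrete, here is a minimal sketch, my own illustration rather than anything from Rosenberg’s program (which involves no software), of a comment modeled as the four NVC components. All names here are hypothetical:

```python
from dataclasses import dataclass

@dataclass
class NvcMessage:
    """A comment decomposed into Rosenberg's four NVC components.
    Field names are illustrative, not drawn from any real API."""
    observation: str  # what was concretely seen or heard
    feeling: str      # the commenter's own emotional response
    need: str         # the underlying need or value
    request: str      # a concrete, doable request

    def render(self) -> str:
        """Recombine the components into a readable comment."""
        return (f"When I saw {self.observation}, I felt {self.feeling} "
                f"because I need {self.need}. Would you be willing to "
                f"{self.request}?")

msg = NvcMessage(
    observation="the episode skip the control-group data",
    feeling="confused",
    need="clarity about the method",
    request="link the full study",
)
print(msg.render())
```

The point of the structure is that abuse has nowhere to live: each field asks for something specific, and a hostile one-liner simply fails to decompose.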

Very well: We have a code that has been shown to lift sensitive communication out of the gutter, and we have rampant gutter-level communication conveyed by humans via code. I can think of no dots more needful of connecting in information design today.

Last year, following the Isla Vista massacre and the subsequent storm of online gendered vitriol, a few friends and I began investigating this angle, hoping to find an industry working group, standards proposal or the like. To date, we have found nothing: nothing at Facebook, nothing at Google, nothing in academia.

Later this year, we hope to prototype an abuse-resistant online community based in part on an interactive user flow that enforces some basic standards of communication. Perhaps in this fashion we can begin to make a dent in a horrible problem that, after all, exists only because its host web applications allow it.
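As a rough illustration of what such an enforcing user flow might check, here is a hedged sketch. The field names and the accept/re-prompt behavior are assumptions of mine for the example, not a description of the actual prototype:

```python
from __future__ import annotations

# Hypothetical comment gate: a submission is accepted only when every
# required structural component is filled in. The component names below
# are assumptions, echoing the NVC-style decomposition discussed above.
REQUIRED_FIELDS = ("observation", "feeling", "need", "request")

def gate_comment(fields: dict[str, str]) -> tuple[bool, list[str]]:
    """Return (accepted, problems).

    A real interactive flow would loop, re-prompting the user to fill
    in each missing component rather than rejecting the comment outright.
    """
    problems = [name for name in REQUIRED_FIELDS
                if not fields.get(name, "").strip()]
    return (not problems, problems)

ok, problems = gate_comment({"observation": "the audio cuts out at 2:10",
                             "feeling": "frustrated",
                             "need": "",
                             "request": "re-upload the clip"})
# ok is False and problems == ["need"]: the UI would ask the commenter
# to say what need the complaint is about before the comment can post.
```

The design choice worth noting is that the gate validates structure, not vocabulary: rather than blocklisting bad words (which trolls route around), it requires the comment to take a shape that abuse rarely fits.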

Part two of this article takes a closer look at how UI and UX design can help address the issue.

This article is published as part of the IDG Contributor Network.
