How UX/UI designers and developers can cure the common internet troll

Combating the pandemic of trolling and online abuse

Ballmer town hall

At no live town hall or performance would we expect to find hecklers streaming obscenities at the presenter or each other.  Why would we accept this behavior online?

Credit: Nick Barber

In my last post I used this space to begin looking at the problem of computer-facilitated abusive communication in online communities, and to propose that UI and interaction design could make a positive difference.  This week I’m going to look at that proposal a little more closely.

What would a technology-mitigated (as opposed to merely technology-mediated) user experience of commenting look like?  Commenting is essentially human/human interaction rather than strictly human/computer interaction (or at least, it is more directly human/human than most UX), so one might expect to find instructive examples in offline human life, and we do.

At no live public performance, lecture, press conference or town hall do we expect to find attendees who, within the format of whatever question-answer period might follow, stream obscenities at those onstage or each other.  Moreover, no such event would lack human security to remove anyone depraved, inebriated or antisocial enough to do so.  Naturally the results might be different if comments and questions were deposited anonymously, ransom-note style, to be aired to all subsequently by a third party, but no venue is likely to subject itself to contributions like that.

In direct human interaction, between groups as well as between individuals, obnoxiousness is inhibited by the spectre of repercussions to the human relationship in question, which is generally of some value to the potentially obnoxious individual.  In online interaction, we have the opportunity to inhibit obnoxiousness via technology.

What if Big Data got good at mining communications to identify patterns of threatening, demeaning or inflammatory speech?  Communication data is already mined to power language translation, track disease outbreaks, and (at least incipiently) assess suicide risk.
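To make the idea concrete, here is a deliberately crude sketch of pattern-based flagging.  The pattern list and function names are hypothetical illustrations invented for this post; a production system would use a classifier trained on large corpora, not a hand-made list, and would route flags to human moderators rather than act automatically.

```python
import re

# Hypothetical, illustrative patterns -- NOT a real moderation word list.
ABUSIVE_PATTERNS = [
    r"\byou people are\b",   # demeaning generalization
    r"\bshut up\b",          # hostile imperative
    r"\bnobody asked\b",     # dismissive put-down
]

def abuse_score(comment: str) -> float:
    """Return the fraction of patterns matched -- a toy stand-in for
    the large-scale pattern mining the article imagines."""
    text = comment.lower()
    hits = sum(1 for pattern in ABUSIVE_PATTERNS if re.search(pattern, text))
    return hits / len(ABUSIVE_PATTERNS)

def needs_review(comment: str, threshold: float = 0.3) -> bool:
    # Flag for a human moderator; the system itself never removes anything.
    return abuse_score(comment) >= threshold
```

The design choice worth noting is that the code only flags; the judgment call stays with a person, mirroring the security staff at a live event.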

What if participants had to agree to face exposure of their real identity, IP address, and contact information if identified as violators by human moderators, analogous to the exposure risked by violators at live events?  Yes, some might evade such exposure, but the effort of doing so might prove prohibitive to most.

What if comments were interactively parsed into observations, interpretations, feelings, needs and requests, as various communication techniques suggest, before they ever managed to do their damage?  Could we design a UX that would make this crucial step of communication easy and friendly?
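One way a UI might make that step easy is to replace the single free-text box with one field per component, and prompt the commenter for anything left blank before posting.  The sketch below assumes that design; the class and function names are hypothetical, not an existing API.

```python
from dataclasses import dataclass, fields

# A structured comment with one field per component named in the article.
@dataclass
class StructuredComment:
    observation: str     # "The article says X..."
    interpretation: str  # "I take that to mean..."
    feeling: str         # "That leaves me feeling..."
    need: str            # "What I need is..."
    request: str         # "Would the author consider...?"

def missing_components(comment: StructuredComment) -> list[str]:
    """Return the names of components left empty, so the UI can
    prompt the commenter for them before the comment is posted."""
    return [f.name for f in fields(comment)
            if not getattr(comment, f.name).strip()]
```

For example, a comment with an empty interpretation and request would come back as `["interpretation", "request"]`, and the UI could highlight those two fields rather than reject the comment outright.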

Obviously, such endeavours could proceed to the point where they posed an undue burden on an online community and its participants; seemingly no one knows where that point is yet, because no one has begun the process.  We would likely know when we passed it if viewers and commenters began to drop out of such communities (as overly strict IRL community laws might drive away citizens in a free and mobile society).

Media outlets might be justly concerned about this potential loss.  They should also consider, however, that until that point is reached there is vast potential for improving the experience of providing content to such a community.  And all else being equal, the better the experience of providing content to a given community, the likelier that community is to attract great content, and audiences hungry for it.

But to return to the post-performance question-answer session analogy – no one expects completely unfettered participation in such a session.  Venues determine who accesses their microphones, and all participants implicitly consent to this order so that something other than chaos may be achieved.  If, as in some ostensible town halls, access is seen to be too restrictive, remedies are to be sought in additional civic venues.

This approach has not resulted in perfect community historically, but it has avoided the paradigm of free-for-all abuse we see online and all the damage that paradigm inflicts.  As designers and developers, we can apply its lessons to our online applications and thereby relieve the world of a real, widespread and pressing problem.

To address briefly the notion that such modifications might infringe on guaranteed freedoms of speech:  In my opinion, the First Amendment of the United States Constitution is one of the pre-eminent legal achievements in all of human history.  I would, and when appropriate I do, vehemently defend it and its analogs in civil society worldwide.  But of course that amendment in no way forces ITworld to give me a platform for my ideas on their Web site, nor should it.  And neither does the law provide any guaranteed platform to any reader for commenting on this blog, or any other content, for that matter.  

In both cases, access to the platform and rules for its use are discretions of the platform owner.  I for one prefer that ITworld, and any other platform, apply its discretion in ways that encourage real, meaningful discourse in comments sections.  

In short - user interface design has a vital role to play if such an improved experience of commenting and community is to become the norm.  It’s time our profession stepped up to the plate and took this problem on in a meaningful way.

This article is published as part of the IDG Contributor Network.
