Real identity helps foster healthy online communities

Comment trolls — the nasty, often race-baiting, empty-headed bashers who pollute online dialogue — are the bane of news sites that allow comments on stories.

One of the most effective and proven methods for bringing such behavior under control is for newspaper staff members to closely monitor comments and have the power to delete and ban. It’s kind of like fighting graffiti — the quicker you paint over the marred wall, the less likely it is to be hit again.

Good technology, such as profanity filters, comment rating and reputation systems, helps, too. But that only gets you so far.

I’ve long believed that the most effective, and so far least employed, tool is tying comments to identity.

When we brought participation to Bakersfield.com, we tied it to “persona,” by which we meant allowing a person to create whatever identity he or she wanted, whether real or pseudonymous. The theory was that if people have an identity to protect, they will behave better.

My desire to do that grew out of my experience with comments in Ventura.

Here’s some psychological research suggesting that this is the right track (via TechCrunch):

Social psychologists have known for decades that, if we reduce our sense of our own identity – a process called deindividuation – we are less likely to stick to social norms. For example, in the 1960s Leon Mann studied a nasty phenomenon called “suicide baiting” – when someone threatening to jump from a high building is encouraged to do so by bystanders. Mann found that people were more likely to do this if they were part of a large crowd, if the jumper was above the 7th floor, and if it was dark. These are all factors that allowed the observers to lose their own individuality.

Social psychologist Nicholas Epley argues that much the same thing happens with online communication such as email. Psychologically, we are “distant” from the person we’re talking to and less focused on our own identity. As a result we’re more prone to aggressive behaviour, he says.

So the more we can engineer participation to close the gap between loss of individuality and sense of identity, the better chance we have of maintaining civil dialogue.

After leaving Bakersfield, I came to the conclusion that “persona” wasn’t enough. This is no reflection on anything that has happened in Bakersfield. I have simply come to believe that news sites should require real identity. No more explicit acceptance of pseudonymous participation.

Journalistically, I think this is the responsible thing to do. Many of the conversations news sites host are important to the civic life of our communities, and people who read these comments have a right to know which of their friends, neighbors and leaders (not sock puppets) are driving the conversations.

In a news story, we wouldn’t allow an anonymous comment without a good reason (and there are far fewer anonymous sources in local news columns than in major news outlets), so why allow them, unvetted, in comments on stories?

Comments on stories are supposed to serve a primary purpose of advancing the story, not just providing a forum for rants and raves (though, by default, they do that, too). Anonymity, pseudonymous or otherwise, runs counter to the spirit of robust, honest, civic conversation.

That’s part of the journalistic case for requiring real identity.

But returning to the psychological case above, it seems to me that if we make our forums a place where people expect to be dealing with each other on a real-identity basis, especially in smaller communities, won’t they more often naturally be more civil?

While the psychological research makes it apparent that even persona is better than anonymity, real identity should work even better. I think.

As for enforcing real identity:

  • Facebook is kind of showing us the way, and lowering the barrier to full disclosure (especially for younger readers, who are less hung up on privacy than older readers).
  • In my own experience with user registration systems, local users of local newspaper sites are surprisingly honest about their real names and addresses when they register to read news. I think we’ll see only a slight drop-off when registration is tied to participation.
  • When you require real identity in the terms and conditions, you now have another tool to justify banning trolls. Trolls almost always try to game the system, and they’re easy to spot.
  • Generally, it’s easy to spot people who are trying to participate anonymously. You can spot-check your registration database and delete obviously bogus accounts. It’s quick and easy to do in a well-designed system (see the sketch below).
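
Here is a minimal sketch of what that kind of spot check could look like. The table name, column names and suspect patterns are assumptions for illustration, not a description of any particular registration system, and nothing is deleted automatically; a person reviews whatever gets flagged.

```python
import re
import sqlite3

# Patterns that often mark a throwaway registration. Illustrative guesses only.
SUSPECT_NAME = re.compile(r"^(asdf|test|qwerty|anon|xxx+|admin)\b", re.IGNORECASE)
SUSPECT_EMAIL = re.compile(r"@(mailinator|trashmail|example)\.", re.IGNORECASE)


def flag_suspect_registrations(db_path="registrations.db"):
    """Return registrations worth a human look before any deletion.

    Assumes a `users` table with `id`, `full_name` and `email` columns.
    """
    conn = sqlite3.connect(db_path)
    flagged = []
    for user_id, name, email in conn.execute(
        "SELECT id, full_name, email FROM users"
    ):
        if SUSPECT_NAME.search(name or "") or SUSPECT_EMAIL.search(email or ""):
            flagged.append((user_id, name, email))
    conn.close()
    return flagged


if __name__ == "__main__":
    for row in flag_suspect_registrations():
        print("Review:", row)
```

The point is that the check is cheap: a query and a quick read-through, not a forensic investigation.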

I think over time, we are going to see fewer and fewer online communities that allow completely anonymous participation. Most are going to follow the persona model or the real identity model. Users will increasingly accept these requirements, either because they are common, or because they recognize the value of identity in maintaining a vibrant community.

Most people don’t like seeing their communities trashed. They are more than willing to help us keep things neat and tidy, but they also look to us to provide the manpower and technological solutions that make running healthy communities possible.

BTW: You’ll note that nowhere in this post did I use the term “virtual community.” There is nothing virtual about healthy online communities. They are very real, and very important. The old term “virtual community” demeans online communities, which are just as important to their participants and members as any offline community.

9 thoughts on “Real identity helps foster healthy online communities”

  1. From your mouth to god’s ears. Registration’s a must. We MSM Webbers can stake our claim on credibility and transparency in our online community conversation — and that means we can offer our audiences the one thing they can’t really get elsewhere: a safe place to have a civil conversation. And, I totally believe audience will increase, not decrease, as our users come to understand that we are doing the right things. Sooner, better. lgc

  2. Interesting post, as always.

    My question on this has always been, how do you verify that someone is who he says he is? What stops a user — particularly an abusive user — from using a fictional name, or even worse, the name of another real person?

  3. Dean, my answer is at the end of the post.

    It’s extremely unusual for an abusive user to go to much effort to hide their identity. They’re easy to spot.

  4. I generally agree about the nature of the obviously abusive user; they reveal themselves without much prompting.

    My point is that without verification of identity, it is misleading to say that requiring users to fill out first and last name fields in a form constitutes “real identity.” What you end up with is something that looks like a real name but, for all the site knows, is no less anonymous than a user name. Good registration systems require confirmation of e-mail addresses; if we’re going to call this “real identity,” we ought to have some way to verify that — perhaps a token credit card transaction.

    Short of that, we need a different name for what you’re suggesting.

  5. You should also be collecting addresses and phone numbers and verifying e-mail addresses.

    Regularly run your registration database through CASS (the USPS Coding Accuracy Support System) address validation and remove any registrations that do not certify.

    Phone-verify any registrations that seem authentic but may not be. Spot-check others (a rough version of this sweep is sketched at the end of this comment).

    I feel our only obligation in this regard is to use due diligence and be reasonably sure of the accuracy of the information, and be vigilant about removing all the false registrations we identify.

    We should be transparent about our processes and methods.
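
    A rough sketch of that kind of regular sweep, assuming a hypothetical certify_address() stand-in for whatever CASS-certified validation service a site actually uses; the buckets simply feed the manual follow-up described above:

```python
from dataclasses import dataclass


@dataclass
class Registration:
    user_id: int
    name: str
    address: str
    phone: str


def certify_address(address: str) -> bool:
    """Stand-in for a CASS-certified address validation call.

    A real implementation would hand the address to whatever
    USPS CASS-certified service or vendor library the site uses.
    """
    return bool(address and address.strip())


def sweep(registrations):
    """Split registrations into keep / phone-verify / remove buckets."""
    keep, verify_by_phone, remove = [], [], []
    for reg in registrations:
        if not certify_address(reg.address):
            remove.append(reg)           # address does not certify
        elif not reg.phone:
            verify_by_phone.append(reg)  # plausible, but worth a call
        else:
            keep.append(reg)
    return keep, verify_by_phone, remove
```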

  6. The registration system I built uses an e-mail confirmation to at least ensure that a valid e-mail is being used (a bare-bones version of that flow is sketched at the end of this comment), but of course, free Hotmail, Gmail and Yahoo addresses only take a moment to acquire. We viewed asking for real names, phone numbers, addresses and the like as too high a barrier to entry for a general news site.

    We’ve had some banned users come back over and over again under different e-mails, but as you point out, they are generally easy to spot unless they tone it down and stay under the radar (which they tend to do after 2 or 3 bannings). Many of the ones who come back and don’t tone it down are arrogant enough to use usernames like “3dtimeback” and the like. But when our community stops complaining about them, we leave them be.

    We’ve discussed reputation as a concept, but haven’t settled on a clear way forward in that area.
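
    A bare-bones sketch of that confirmation step, with token generation and checking shown and the actual mail sending, storage and web framework left out; the secret and function names are placeholders, not anyone’s production code:

```python
import hashlib
import hmac
import secrets

# Server-side secret; in practice this comes from configuration, never hard-coded.
SERVER_SECRET = b"change-me"


def make_confirmation_token(email: str) -> str:
    """Build the token embedded in the confirmation link e-mailed to the user."""
    nonce = secrets.token_hex(8)
    digest = hmac.new(SERVER_SECRET, f"{email}:{nonce}".encode(), hashlib.sha256).hexdigest()
    return f"{nonce}.{digest}"


def confirm(email: str, token: str) -> bool:
    """True only if the clicked token matches the one issued for this e-mail."""
    try:
        nonce, digest = token.split(".", 1)
    except ValueError:
        return False
    expected = hmac.new(SERVER_SECRET, f"{email}:{nonce}".encode(), hashlib.sha256).hexdigest()
    return hmac.compare_digest(digest, expected)
```

    Anyone who clicks the link proves only that they control the mailbox, which is all this step can establish; as noted above, a free webmail address still clears that bar.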

  7. Difficult to enforce identity verification, but somehow tying registration to e-mail validation (and actively reining in or removing scofflaws) seems to bring a civility lacking in unverified, drive-by participation. I believe the question remains rhetorical at this point.

    I know, however, that switching to registration significantly reduces participation, as we went from the Wild West to Registered Commenters only when we upgraded our CMS. We haven’t tracked it precisely, as the site traffic has grown substantially, but we most likely lost 50% of our comments.

    –Randy Campbell

  8. Dean, your hypocrisy is obvious to those who have had dealings with your editorial duties. Specifically, when you block somebody from posting based on a whim or incorrect assumptions on your part, you are worse than the bloggers you block.

    Have enough managerial ability and common sense to send an e-mail to those you’ve blocked telling them they’ve been blocked, rather than let them believe there is a technical problem, continue to attempt a post, and then ban them for not being able to read your mind or, as you do, make assumptions.

  9. If you truly want to foster real identity, have Dean Betz post his e-mail address near the links for comments so people who suspect they’ve been blocked can contact Dean to do his job for him and correct any misunderstandings on his part.
