LinkedIn's Scarlet Letter - Episode 14
Marc, Scott, and Randy discuss LinkedIn's so-called SWAM (Site Wide Automatic Moderation) policy, and Scott provides some tips on moderation system design...
[There is no news segment this week, in order to dig a little deeper into the nature of moderating the commons (aka groups).]
Additional Links:
- John Mark Troyer, Social Media Evangelist at VMware, shares his passion for the podcast.
- Forbes: LinkedIn Ruckus Continues As Victims Of Site-Wide Moderation Defect
- Has LinkedIn changed its SWAM policy - And not told anyone? at Brainstorm Digital
- LinkedIn Help: Removing Spam from Your Group
Block & Delete takes the member out of the group and places them on the Blocked tab, which prevents them from requesting to join again. It also deletes all past contributions. Please be aware that when you select Block & Delete for a group member, this will result in automatic moderation of all their future posts in any group site-wide. Read more about removing spam from your group. (emphasis ed.)
- LinkedIn Help: Why are my posts going through moderation in all of my groups?
3/11/14: Note: The mechanism that changes a member's posting permissions is automated and cannot be reversed by LinkedIn Customer Support. We cannot provide a list of which groups blocked a member due to privacy restrictions. (emphasis ed.)
- Elinor Ostrom's Nobel Prize-winning work in Governing the Commons is must-read material for anyone building groups software.
- Elinor Ostrom's 8 Principles for Managing A Commons
- We also suggest Order without Law: How Neighbors Settle Disputes by Robert Ellickson
- and Cows, Pigs, Wars, and Witches - The Riddles of Culture by Marvin Harris
Transcript
John Mark Troyer: Hi, this is John Mark Troyer from VMware, and I'm listening to the Social Media Clarity podcast.
Randy: Welcome to episode 14 of the Social Media Clarity podcast. I'm Randy Farmer.
Scott: I'm Scott Moore.
Marc: I'm Marc Smith.
Marc: Increasingly, we're living our lives on social-media platforms in the cloud, and in order to protect themselves, these services are deploying moderation systems, regulations, and tools to control spammers and abusive language. These tools are important, but sometimes their design has unintended consequences. Today we're going to explore some choices made by the people at LinkedIn in their Site Wide Automatic Moderation system, known as SWAM. The details of this service are interesting, and they have remarkable consequences, so we're going to dig into it as an example of the kinds of choices and services that are already cropping up on all sorts of sites. This one is particularly interesting because the consequence of losing access to LinkedIn could be quite serious - it's a very professional site.
Scott: SWAM is the unofficial acronym for Site Wide Automated Moderation, and it's been active on LinkedIn for about a year now. Its intent is to reduce spam and other kinds of harassment in LinkedIn groups. It's triggered by a group owner or group moderator removing or blocking a member from their group, and the impact is site wide: if somebody is blocked in one group, they are put into what's called moderation in all groups. That means their posts no longer automatically show up when they post; they go into a moderation queue and have to be approved before the rest of the group can see them.
Randy: Just so I'm clear, being flagged in one group means that none of your posts will appear in any other LinkedIn group without explicit approval from the moderator. Is that correct?
Scott: That's true. Without the explicit approval of the group that you're posting to, your posts will not be seen.
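[Editor's note: a minimal sketch, not LinkedIn's actual code, of the site-wide escalation Scott describes - a single group-level block flips a global flag, and that flag forces every future post, in every group, into a moderation queue.]

```python
from collections import defaultdict

class SwamLikeModeration:
    """Toy model of the site-wide escalation described above."""

    def __init__(self):
        self.flagged_members = set()          # members under site-wide moderation
        self.pending = defaultdict(list)      # group -> posts awaiting approval
        self.visible = defaultdict(list)      # group -> published posts

    def block_in_group(self, moderator, member, group):
        # A single group-level "Block & Delete" escalates to a site-wide flag.
        self.flagged_members.add(member)

    def submit_post(self, member, group, text):
        if member in self.flagged_members:
            # Flagged members' posts are held in *every* group,
            # not just the one where they were blocked.
            self.pending[group].append((member, text))
            return "held for moderation"
        self.visible[group].append((member, text))
        return "published"

# Usage: one block in "Group A" holds future posts everywhere.
swam = SwamLikeModeration()
swam.block_in_group("owner_of_a", "alice", "Group A")
print(swam.submit_post("alice", "Group B", "Hello"))  # -> held for moderation
```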
Randy: That's interesting. This reminds me of the Scarlet Letter from American Puritan history. When someone was accused of a crime, specifically adultery, they were branded so that everyone could tell. Regardless of whether they were a willing party to the adultery or a victim, they were cast out. SWAM creates a similar cast-out mechanism, but unlike then, when branding was an explicit action the whole community knew about, a moderator on a LinkedIn group can do this by accident.
Scott: In a Forbes article from February, someone related how they had joined a LinkedIn group intended for women, even though it had a male group owner and never explicitly stated that it was for women only. The practice was that if men joined and posted, the owner would simply flag their posts as a way of keeping the group women-only. Because the rules were not clear and the expectation was never made explicit, this person was put into moderation for making a pretty honest mistake.
Randy: And this person was a member of multiple groups and now their posts would no longer automatically appear. In fact, there's no way to globally turn this off, to undo the damage that was done, so now we have a Scarlet Letter and a non-existent appeals process, and this is all presumably to prevent spam.
Scott: Yeah, supposedly.
Randy: So it has been a year. Has there been any response to the outcry? Have there been any changes?
Scott: Yes. It seems that LinkedIn is reviewing the policy, and they've made a few minor changes. The first notable one is that moderation is now temporary: it can last an undetermined amount of time, up to a few weeks. The second is that they seem to have actually expanded how you can get flagged, to include any posts, contributions, or comments that are marked as spam or flagged as not relevant to the group.
Randy: That's pretty amazing. First of all, shortening the time frame doesn't really do anything. You're still stuck with a Scarlet Letter, only it fades over months.
Marc: So there's a tension here. System administrators want to create code that essentially is a form of law. They want to legislate a certain kind of behavior and reduce the cost imposed by people who violate it, and that seems sensible. I think what we're exploring here is unintended consequences, and the fact that the design of these systems seems to lack some of the features that previous physical-world or legal relationships have had: you get to know something about your accuser, you get to see some of the evidence against you, you get to appeal. All of these are expensive, and I note that LinkedIn will not tell you who or which group caused you to fall into moderation status; they feel there are privacy considerations there. It is a very different legal regime, and it's being imposed in code.
Randy: Yes. What's really a shame is that they're trying to innovate here, when in fact there are established best practices that avoid these problems. The first of those is to evaluate content, not users. What they should be focusing on is spam detection and behavior modification; banning or placing people into moderation, which is what they're doing, does neither. It certainly catches a certain class of spammer, but the spam itself is already caught by the reporting. Automatically suspending someone from the group where they spammed, or putting them into auto-moderation for that group alone, should work fine.
Also, doing traffic analysis - looking for the same member being flagged in multiple groups within a short period of time - is a great way to identify a spammer and deal with them. What you don't need to do is involve volunteer moderators in cleaning up the exceptions. LinkedIn can still get rid of spammers without making moderators handle appeals, because in effect that is today's appeals process: you appeal to every single other group you're in. That's absurd, because you've done nothing wrong there - you may be a heavy contributor there. We've done this in numerous places; I've mentioned my book Building Web Reputation Systems on the podcast before, and Chapter 10 describes how we eliminated spam from Yahoo Groups without banning anyone.
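[Editor's note: a minimal sketch of the content-first approach Randy describes, using assumed thresholds and illustrative function names. Reported posts are hidden, consequences stay inside the reporting group, and only a cross-group spike in flags - the "traffic analysis" - escalates the account to the platform's own spam team, never to other groups' moderators.]

```python
import time
from collections import defaultdict

# Assumed thresholds, for illustration only.
FLAG_WINDOW_SECONDS = 60 * 60      # look at flags from the last hour
CROSS_GROUP_THRESHOLD = 3          # flagged in 3+ distinct groups -> likely spammer

flags = defaultdict(list)          # member -> [(timestamp, group)]

def report_spam(member, group, post):
    """Evaluate the content: hide the post and auto-moderate the member
    in this group only. No other group is affected."""
    hide_post(post)
    set_group_level_moderation(member, group)
    flags[member].append((time.time(), group))
    if looks_like_cross_group_spammer(member):
        escalate_to_platform_spam_team(member)   # the platform acts; other moderators never do

def looks_like_cross_group_spammer(member):
    cutoff = time.time() - FLAG_WINDOW_SECONDS
    recent_groups = {g for (t, g) in flags[member] if t >= cutoff}
    return len(recent_groups) >= CROSS_GROUP_THRESHOLD

# Stubs standing in for the platform's real actions.
def hide_post(post): print(f"hidden: {post!r}")
def set_group_level_moderation(member, group): print(f"{member} moderated in {group} only")
def escalate_to_platform_spam_team(member): print(f"{member} escalated to platform review")

# Usage: three reports from three different groups within the hour escalates.
for g in ["Group A", "Group B", "Group C"]:
    report_spam("spam_bot", g, "buy followers now")
```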
Marc: I would point us to the work of Elinor Ostrom, an economist and social theorist, who explored the ways that groups of people can manage each other's behavior without necessarily imposing draconian rules. Interestingly, she came up with eight basic rules for managing the commons, which I think is a good metaphor for what these LinkedIn discussion groups are.
- One is that there is a need to "Define clear group boundaries." You have to know who's in the group and who's not in the group. In this regard, services like LinkedIn work very well. It's very clear that you are either a member or not a member.
- Rule number two, "Match rules governing use of common goods to local needs and conditions." Well, we've just violated that one. What didn't get customized to each group is how the ban hammer is used. What comes up in the stories of people caught in the gears of this mechanism is that group managers have different understandings of what the ban hammer means. Some of them are just trying to sweep out what they think of as old content, and what they've actually done is smear a dozen people with a tar that will follow them around LinkedIn.
- Three is that people should "Ensure that those affected by the rules can participate in modifying the rules." I agree that people have a culture in these groups, and they can modify the rules of that culture, but they aren't being given the options to tune how the mechanisms are applied and what the consequences of those mechanisms are. What if I want to apply the ban hammer and not have it ripple out to all the other groups you're a member of?
Randy: Well, and that's section four.
- Marc: Indeed, which reads, "Make sure the rule-making rights of community members are respected by outside authorities." There should be a kind of federal system in which group managers and group members choose which set of rules they want to live under, but interestingly,
- number five really speaks to the issue at hand. "Develop a system carried out by community members for monitoring members' behavior."
Randy: I would refine that a little bit for online communities: not only monitor, but help shape members' behavior, so that people are helping people conform to their community.
- Marc: Indeed, because this really ties into the next one, which may be the real problem here at the core. "Use graduated sanctions for rule violators." That seems not to be in effect here with the LinkedIn system. You can make a small mistake in one place and essentially have the maximal penalty applied to you. I'm going to suggest that number seven also underscores your larger theme, which is about shaping behavior rather than canceling out behavior.
- Number seven is, "Provide accessible low-cost means for dispute resolution", which is to say bring the violators back into the fold. Don't just lock them up and shun them.
Randy: Specifically on dispute resolution, which includes an appeals process: for Yahoo Answers we implemented one that was almost 100% reliable at discovering who the spammers were. If someone had a post hidden, an email was sent to their registered address saying, "Your post has been hidden," and walking them through the process for appealing. What was interesting is that if the email reached a real human being, it was an opportunity to help them improve their behavior: they could edit the post and repost it.
For example, this is what we do at Discourse.org when you get one of these warnings: you're allowed to edit the offending post and repost it with no penalty. The idea is to improve the quality of the interaction. It turned out that, to a first approximation, all spammers on Yahoo Answers had bogus email addresses, so the appeal would never be processed and the content would stay hidden.
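[Editor's note: a minimal sketch of the appeal-by-email flow Randy describes; the names are illustrative, not Yahoo's or Discourse's actual API. It is also an example of a graduated, low-cost sanction: a real person can edit and repost with no penalty, while a spammer's bogus address means the appeal never arrives and the post stays hidden.]

```python
from dataclasses import dataclass

@dataclass
class Post:
    author_email: str
    text: str
    hidden: bool = False

def send_email(to, subject, body):
    # Stub: a real system would hand this to its mail service.
    print(f"email to {to}: {subject}")

def hide_post_and_notify(post):
    """Hide the reported post and invite the author to appeal by email."""
    post.hidden = True
    # Spammers tend to register bogus addresses, so this notice (and with it
    # the whole appeal) never reaches them -- the post simply stays hidden.
    send_email(
        to=post.author_email,
        subject="Your post has been hidden",
        body="Edit and repost it, with no penalty, via the appeal link.",
    )

def handle_appeal(post, edited_text):
    """Only ever reached if a real person got the email and followed the link."""
    post.text = edited_text   # the author fixes the problem themselves
    post.hidden = False       # reposted with no penalty

# Usage:
p = Post(author_email="alice@example.com", text="off-topic post")
hide_post_and_notify(p)
handle_appeal(p, "on-topic, revised post")
```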
- Well, I'm going to do number eight, and eight says, "Build responsibility for governing the common resource in nested tiers from the lowest level up to the entire interconnected system." It doesn't say let the entire interconnected system have one rule that binds them all.
Randy: And it also says from the bottom up. I actually approve of users marking postings as spam, having that content hidden, and moving some reputation around. Where we run into trouble is when that signal is amplified by moving it up the interconnected system and then re-propagating it across the system. The only party that has to know whether or not someone's a spammer is LinkedIn, the company. No other moderator needs to know; either the content is good or it's not.
Marc: Elinor Ostrom's work is really exciting, and she certainly deserved the Nobel Prize for it because she really is the empirical answer to that belief that anything that is owned by all is valued by none. That's a phrase that leads people to dismiss the idea of a commons, to believe that it's not possible to ethically and efficiently steward something that's actually open, public, a common resource, and of course, the internet is filled with these common resources. Wikipedia is a common resource. A message board is a common resource.
Like the commons that Ostrom studied, a lot of them are subject to abuse, but what Ostrom found was that there were institutions that made certain kinds of commons relationships more resilient in the face of abuse, and she enumerated eight of them. I think the real message is that, given an opportunity, people can collectively manage valuable resources - and give themselves better resources as a result - by effectively managing the inevitable deviance, the marginal cases where people are trying to make trouble. Most people are good.
Scott: Our tips for this episode are aimed at community designers and developers who are building platforms that allow users to form their own groups.
- First, push the power down - empower local control and keep the consequences local.
- Give group owners the freedom to establish and enforce their own rules for civil discourse.
- You will still be able to keep content and behavior within your service's overall terms of use and allow a diversity of culture within different groups.
- If, as a service, you detect broader patterns of behavior (in content or users), you can take additional action. But respect that different groups may prefer different behaviors, so be careful not to let one group, or even a small set of groups, dictate consequences that impact all other groups.
- Now that we're giving groups local control, be sure to allow them to moderate content separately from moderating members (see the sketch after these tips).
- As often as not, good members misstep and make bad posts, especially if they're new to a group.
- Punishing someone outright can cost communities future valuable members.
- By separating content from members, the offending content can be dealt with and the member helped to fit the local norms.
- Ask community managers and you will hear stories of a member who started off on the wrong foot and eventually became a valued member of their community. This is common. Help group moderators avoid punishing people who make honest mistakes.
- When it comes to dispute resolution between members and group moderators, one way to make it easy is to prevent the potential dispute in the first place.
- Make it easy for moderators to set behavior expectations by posting their local rules and guidelines, and build space into your design where members can easily find those local rules.
- Also give group owners the option of requiring an agreement to the local rules before a member is allowed to join the group.
- Also make it easy for members to contact moderators, and encourage them to ask about a post before they even post it.
- If the group platform offers a moderation queue, give moderators clear notifications about pending posts so that reviewing the queue fits easily into their workflow - because moderating communities does have a workflow.
- And finally, build a community of group owners and moderators -- and LISTEN to them as they make recommendations and request tools that help them foster their own local communities. The more you help them build successful communities, the more successful your service or platform will be.
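[Editor's note: a minimal data-model sketch of the tips above, with illustrative names rather than any platform's real schema. Content flags, member standing, rules, and the moderation queue all live at the group level, so nothing one group does can ripple out to the rest of the service.]

```python
from dataclasses import dataclass, field

@dataclass
class Group:
    """Each group carries its own rules, its own queue, and its own sanctions."""
    name: str
    local_rules: str = "Be relevant and be civil."
    require_rules_agreement: bool = False                   # owner's choice at join time
    moderation_queue: list = field(default_factory=list)
    moderated_members: set = field(default_factory=set)     # group-scoped, never global

    def flag_post(self, member, post):
        # Deal with the content; optionally moderate the member *here only*.
        self.moderation_queue.append((member, post))
        self.moderated_members.add(member)
        notify_moderators(self, pending=len(self.moderation_queue))

    def member_is_moderated(self, member):
        # Standing is checked per group; other groups never see this flag.
        return member in self.moderated_members

def notify_moderators(group, pending):
    # Clear notification so reviewing the queue fits the moderators' workflow.
    print(f"[{group.name}] {pending} post(s) awaiting review")

# One group's sanction stays local:
a, b = Group("Group A"), Group("Group B")
a.flag_post("alice", "borderline post")
print(a.member_is_moderated("alice"), b.member_is_moderated("alice"))  # True False
```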
Randy: That was a great discussion. We'd like the people at LinkedIn to know that we're all available as consultants if you need help with any of these problems.
Marc: Yeah, we'll fix that for you.
Randy: We'll sign off for now. Catch you guys later. Bye.
Scott: Good-bye.
Marc: Bye-bye.
[Please make comments over on the podcast's episode page.]