
March 19, 2014


LinkedIn's Scarlet Letter - Episode 14


Marc, Scott, and Randy discuss LinkedIn's so-called SWAM (Site Wide Automatic Moderation) policy, and Scott provides some tips on moderation system design...

[There is no news segment this week, so that we can dig a little deeper into the nature of moderating the commons (a.k.a. groups).]


Transcript

John Mark Troyer: Hi, this is John Mark Troyer from VMware, and I'm listening to the Social Media Clarity podcast.

Randy: Welcome to episode 14 of the Social Media Clarity podcast. I'm Randy Farmer.

Scott: I'm Scott Moore.

Marc: I'm Marc Smith.

Marc: Increasingly, we're living our lives on social-media platforms in the cloud, and in order to protect themselves, these services are deploying moderation systems, regulations, and tools to control spammers and abusive language. These tools are important, but sometimes their design has unintended consequences. Today we're going to explore some choices made by the people at LinkedIn in their Site Wide Automatic Moderation system, known as SWAM. The details of this service are interesting, and they have remarkable consequences, so we're going to dig into it as an example of the kinds of choices and services that are already cropping up on all sorts of sites. This one is particularly interesting because the consequence of losing access to LinkedIn could be quite serious: it's a very professional site.


Scott: SWAM is the unofficial acronym for Site Wide Automatic Moderation, and it's been active on LinkedIn for about a year now. Its intent is to reduce spam and other kinds of harassment in LinkedIn groups. It's triggered by a group owner or group moderator removing or blocking a member from their group, and the impact is site wide: if somebody is blocked in one group, they are put into what's called moderation in all groups. That means their posts do not automatically show up when they post; they go into a moderation queue and have to be approved before the rest of the group can see them.

Randy: Just so I'm clear, being flagged in one group means that none of your posts will appear in any other LinkedIn group without explicit approval from the moderator. Is that correct?

Scott: That's true. Without the explicit approval of the group that you're posting to, your posts will not be seen.
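
[As a minimal sketch of the behavior Scott describes: one group-level removal sets a global flag, and every group then routes that member's posts into its own moderation queue. The names below are hypothetical, not LinkedIn's actual implementation.]

```python
# Hypothetical sketch of the SWAM behavior described in the transcript.

class Member:
    def __init__(self, member_id):
        self.member_id = member_id
        self.globally_premoderated = False  # the site-wide "scarlet letter"

class Group:
    def __init__(self, name):
        self.name = name
        self.pending_queue = []   # posts awaiting moderator approval
        self.visible_posts = []   # posts the whole group can see

    def submit_post(self, member, post):
        # Because the flag is global, a removal anywhere affects every group.
        if member.globally_premoderated:
            self.pending_queue.append(post)
        else:
            self.visible_posts.append(post)

def remove_member_from_group(member, group):
    # Note that which group did the removing isn't even recorded here,
    # mirroring the lack of transparency discussed later in the episode.
    member.globally_premoderated = True
```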

Randy: That's interesting. This reminds me of the Scarlet Letter from American Puritan history. Someone accused of a crime, specifically adultery, would be branded so that everyone could tell; regardless of whether they were a willing party to the adultery or a victim, they were cast out. SWAM creates a similar cast-out mechanism, but unlike then, when it was an explicit action the whole community knew about, a moderator on a LinkedIn group can do this by accident.

Scott: In a Forbes article from February, someone related the story of joining a LinkedIn group that was intended for women, even though it had a male group owner and nothing explicitly stated that the group was women-only. The owner's practice was that if men joined the group and posted, he would simply flag their posts as a way of keeping it a women-only group. The result was that, simply because the rules were not clear and the expectation was never made explicit, this person was put into moderation for making what was essentially an honest mistake.

Randy: And this person was a member of multiple groups and now their posts would no longer automatically appear. In fact, there's no way to globally turn this off, to undo the damage that was done, so now we have a Scarlet Letter and a non-existent appeals process, and this is all presumably to prevent spam.

Scott: Yeah, supposedly.

Randy: So it has been a year. Has there been any response to the outcry? Have there been any changes?

Scott: Yes. It seems that LinkedIn is reviewing the policy. They've made a few minor changes. The first notable one is that moderation is now temporary: it can last an undetermined amount of time, up to a few weeks. The second is that they've actually expanded how you can get flagged to include any post, contribution, or comment that is marked as spam or flagged as not being relevant to the group.

Randy: That's pretty amazing. First of all, shortening the time frame doesn't really do anything. You're still stuck with a Scarlet Letter, only it fades over months.

Marc: So there's a tension here. System administrators want to create code that essentially is a form of law. They want to legislate a certain kind of behavior, and they want to reduce the cost of people who violate that behavior, and that seems sensible. I think what we're exploring here is unintended consequences, and the fact that the design of these systems seems to lack some of the features that previous physical-world or legal relationships have had: you get to know something about your accuser, you get to see some of the evidence against you, you get to appeal. All of these are expensive, and I note that LinkedIn will not tell you who, or which group, caused you to fall into the moderation status. They feel that there are privacy considerations there. It is a very different legal regime, and it's being imposed in code.

Randy: Yes. What's really a shame is that they are trying to innovate here, when in fact there are established best practices that avoid these problems. The first-order best practice is to evaluate content, not users. What they should be focusing on is spam detection and behavior modification. Banning or placing users into moderation, which is what they're doing, accomplishes neither. It certainly catches a certain class of spammer, but the spam itself gets caught by the reporting. Automatically suspending someone from the group where the spam was reported, or putting them into auto-moderation for that group, should work fine.

Also, doing traffic analysis on this happening in multiple groups in a short period of time is a great way to identify a spammer and deal with them, but what you don't need to do is involve volunteer moderators in cleaning up the exceptions. The service can still get rid of spammers without moderators handling the appeals, because, in effect, the current appeals process is that you appeal to every single other group you're in. That's absurd, because you've done nothing wrong there - you may be a heavy contributor there. We've done this in numerous places: I've mentioned before on the podcast my book Building Web Reputation Systems, and Chapter 10 describes how we eliminated spam from Yahoo Groups without banning anyone.
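
[A rough sketch of the content-first approach Randy describes: hide reported content per group, and use cross-group velocity rather than volunteer moderators to catch spammers. The thresholds, field names, and escalation path below are assumptions for illustration, not anything LinkedIn or Yahoo actually shipped.]

```python
# Hypothetical sketch: act on reported content per group, and let
# cross-group velocity (not volunteer moderators) identify likely spammers.
from collections import defaultdict
from time import time

REPORT_THRESHOLD = 3        # reports needed to hide a post within one group
VELOCITY_WINDOW = 3600      # seconds
VELOCITY_GROUP_LIMIT = 4    # distinct groups hit within the window

hidden_events = defaultdict(list)  # member_id -> [(timestamp, group_id), ...]

def report_post(post, report_count, member_id, group_id):
    """Hide the content (not the member) once enough users report it."""
    if report_count >= REPORT_THRESHOLD:
        post["hidden"] = True
        hidden_events[member_id].append((time(), group_id))
        if is_probable_spammer(member_id):
            escalate_to_operator(member_id)  # staff review, not group moderators

def is_probable_spammer(member_id):
    """Traffic analysis: many groups hit by one member in a short period."""
    now = time()
    recent_groups = {g for (t, g) in hidden_events[member_id]
                     if now - t <= VELOCITY_WINDOW}
    return len(recent_groups) >= VELOCITY_GROUP_LIMIT

def escalate_to_operator(member_id):
    # Stand-in for handing the account to the service's own staff.
    print(f"flag account {member_id} for operator review")
```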

Marc: I would point us to the work of Elinor Ostrom, an economist and social theorist, who explored the ways that groups of people can manage each other's behavior without necessarily imposing draconian rules. Interestingly, she came up with eight basic rules for managing the commons, which I think is a good metaphor for what these LinkedIn discussion groups are.

  1. One is that there is a need to "Define clear group boundaries." You have to know who's in the group and who's not in the group. In this regard, services like LinkedIn work very well. It's very clear that you are either a member or not a member.
  2. Rule number two: "Match rules governing use of common goods to local needs and conditions." Well, we've just violated that one. What didn't get customized to each group is how the ban hammer is used. What I think is happening, and what comes up in the stories where you realize somebody has been caught in the gears of this mechanism, is that people have different understandings of what the ban hammer means. Some group owners are just trying to sweep out what they think of as stale content, and what they've actually done is smeared a dozen people with tar that will follow them around LinkedIn.
  3. Three is that people should "Ensure that those affected by the rules can participate in modifying the rules." I agree that people have a culture in these groups, and they can modify the rules of that culture, but they aren't being given the options to tune how the mechanisms are applied and what the consequences of those mechanisms are. What if I want to apply the ban hammer and not have it ripple out to all the other groups the person is a member of?

    Randy: Well, and that's section four.

  4. Marc: Indeed, which reads, "Make sure the rule-making rights of community members are respected by outside authorities." There should be a kind of federal system in which group managers and group members choose which set of rules they want to live under, but interestingly,
  5. number five really speaks to the issue at hand. "Develop a system carried out by community members for monitoring members' behavior."

    Randy: I would even refine that a little for online communities: not only monitor, but help shape members' behavior, so that people are helping people conform to their community's norms.

  6. Marc: Indeed, because this really ties into the next one, which may be the real problem here at the core. "Use graduated sanctions for rule violators." That seems not to be in effect here with the LinkedIn system. You can make a small mistake in one place and essentially have the maximal penalty applied to you. I'm going to suggest that number seven also underscores your larger theme, which is about shaping behavior rather than canceling out behavior.
  7. Number seven is, "Provide accessible low-cost means for dispute resolution", which is to say bring the violators back into the fold. Don't just lock them up and shun them.

    Randy: Specifically on dispute resolution, which includes an appeals process: for Yahoo Answers, we implemented one that was almost 100% reliable in discovering who a spammer was. If someone had a post hidden, an email would be sent to the registered email address saying, "Your post has been hidden," and walking them through the process for appealing. What was interesting is that if the email arrived at a real human being, it was an opportunity to help them improve their behavior. If they could edit, they could repost.

    For example, this is what we do at Discourse.org: if you get one of these warnings, you are actually allowed to edit the offensive post and repost it with no penalty. The idea is to improve the quality of the interaction. It turns out that, to a first approximation, all spammers on Yahoo Answers had bogus email addresses, so the appeal would never be processed and the content would stay hidden. (A rough sketch of this appeal flow appears just after this list.)

  8. Marc: Well, I'm going to do number eight, and eight says, "Build responsibility for governing the common resource in nested tiers from the lowest level up to the entire interconnected system." It doesn't say let the entire interconnected system have one rule that binds them all.

    Randy: And it also says from the bottom up. I actually approve of users marking postings as spam, having that content hidden, and moving some reputation around. Where we run into trouble is when that signal is amplified by moving it up the interconnected system and then re-propagating it across the system. The only party that has to know whether or not someone is a spammer is LinkedIn itself. No other moderator needs to know. Either the content is good or it's not.
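
[A minimal sketch of the appeal flow Randy describes for Yahoo Answers: hidden posts trigger an appeal email, real people edit and repost, and spammers with bogus addresses never complete the loop. Names and structures are hypothetical.]

```python
# Hypothetical sketch of the appeal flow described above.

def hide_post(post, member):
    post["hidden"] = True
    # The appeal notice goes to the registered address. Real people read it,
    # edit, and repost; spammers with bogus addresses never complete the loop,
    # so their content simply stays hidden.
    send_email(member["registered_email"],
               subject="Your post has been hidden",
               body="Follow this link to edit and resubmit your post.")

def handle_appeal(post, edited_text):
    """Reaching this step at all is strong evidence of a real human."""
    post["text"] = edited_text
    post["hidden"] = False  # reposted with no lasting penalty

def send_email(address, subject, body):
    # Stand-in for a real mailer; just records the outgoing notice.
    print(f"to {address}: {subject}")
```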

Marc: Elinor Ostrom's work is really exciting, and she certainly deserved the Nobel Prize for it because she really is the empirical answer to that belief that anything that is owned by all is valued by none. That's a phrase that leads people to dismiss the idea of a commons, to believe that it's not possible to ethically and efficiently steward something that's actually open, public, a common resource, and of course, the internet is filled with these common resources. Wikipedia is a common resource. A message board is a common resource.

Like the commons that Ostrom studied, a lot of these online resources are subject to abuse, but what Ostrom found was that certain institutions made commons relationships more resilient in the face of abuse, and she enumerated eight of them. I think the real message is that, given an opportunity, people can collectively manage valuable resources and give themselves better resources as a result by effectively managing the inevitable deviance, the marginal cases where people are trying to make trouble, but most people are good.


Scott: Your tips for this episode are aimed at community designers and developers who are building platforms that allow users to form their own groups.

  1. First, push the power down - empower local control and keep the consequences local.
  2. Give group owners the freedom to establish and enforce their own rules for civil discourse.
  3. You will still be able to keep content and behavior within your service's overall terms of use and allow a diversity of culture within different groups.
  4. If, as a service, you detect broader patterns of (content or user) behavior, you can take additional action. But respect that different groups may prefer different behaviors, so be careful not to allow one group, or even a small set of groups, to dictate consequences that impact all other groups.
  5. Now that we are giving local control, be sure to allow groups to moderate content separately from moderating members.
  6. As often as not, good members misstep and make bad posts, especially if they are new to a group.
  7. Punishing someone outright can cost communities future valuable members.
  8. By separating content from members, the offending content can be dealt with and the member helped to fit the local norms (see the sketch after this list).
  9. Ask community managers and you will hear stories of a member who started off on the wrong foot and eventually became a valued member of their community. This is common. Help group moderators avoid punishing people who make honest mistakes.
  10. When it comes to dispute resolution between members and group moderators, one way to make it easy is to mitigate the potential dispute in the first place.
  11. Make it easy for moderators to set behavior expectations by posting their local rules and guidelines and build in space in your design where local rules can be easily accessed by members.
  12. Also give group owners the option of requiring an agreement to the local rules before a member is allowed to join the group.
  13. And make it easy to contact moderators before a member posts, and encourage members to ask about posts before even posting.
  14. Now, if the group platform offers a moderation queue, give clear notifications to moderators about pending posts so that reviewing the queue is easier to include in their workflow, because moderating communities does have a workflow.
  15. And finally, build a community of group owners and moderators -- and LISTEN to them as they make recommendations and request tools that help them foster their own local communities. The more you help them build successful communities, the more successful your service or platform will be.
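
[An illustrative sketch of the tips about keeping consequences local and moderating content separately from members. All structures and names below are hypothetical.]

```python
# Hypothetical structures showing local-only consequences and the separation
# of content moderation from member standing.

class LocalGroup:
    def __init__(self, name, rules_url):
        self.name = name
        self.rules_url = rules_url          # local rules, easy for members to find
        self.hidden_posts = set()           # content-level actions
        self.premoderated_members = set()   # member-level actions, this group only

    def hide_post(self, post_id):
        """Deal with the offending content without touching the member."""
        self.hidden_posts.add(post_id)

    def premoderate_member(self, member_id):
        """A member-level sanction that never leaves this group."""
        self.premoderated_members.add(member_id)

    def can_post_freely(self, member_id):
        return member_id not in self.premoderated_members

# A member pre-moderated in one group still posts freely everywhere else.
jobs_group = LocalGroup("job-postings", "https://example.com/job-postings/rules")
gardening = LocalGroup("gardening", "https://example.com/gardening/rules")
jobs_group.premoderate_member("member-42")
assert gardening.can_post_freely("member-42")
```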

Randy: That was a great discussion. We'd like the people at LinkedIn to know that we're all available as consultants if you need help with any of these problems.

Marc: Yeah, we'll fix that for you.

Randy: We'll sign off for now. Catch you guys later. Bye.

Scott: Good-bye.

Marc: Bye-bye.

[Please make comments over on the podcast's episode page.]

October 01, 2013

Social Networks, Identity, Pseudonyms, & Influence Podcast Episodes

Here are the first 4 episodes of The Social Media Clarity Podcast:

  1. Social Network: What is it, and where do I get one? (mp3) 26 Aug 2013
  2. HuffPo, Identity, and Abuse (mp3) 5 Sep 2013  NEW
  3. Save our Pseudonyms! (Guest: Dr. Bernie Hogan) (mp3) 16 Sep 2013  NEW
  4. Influence is a Graph (mp3) 30 Sep 2013  NEW
Subscribe via iTunes

Subscribe via RSS

Listen on Stitcher

Like us on Facebook

May 05, 2010

Web2.0 Expo Talk — 5 Reputation Missteps

Here are the slides from our presentation yesterday at the Web 2.0 Expo in San Francisco. We will soon be adding all of the speakers' notes to the full version on Slideshare.

October 05, 2009

Do Trolls Eat Spam?

What's the difference between trolling behavior and plain old spam? It's a subtle distinction, but a critical one to understand when combating either. We classify as spam those communications that are indiscriminate, unwanted, and broadcast to a large audience.

Fortunately, those same characteristics that mark a communication as spam—its indiscrimination, its lack of relation to the conversation at hand, its overtly commercial appeals—also make it stand out. You're probably easily able to identify spam on sight, after just a quick initial inspection. We can teach these same tricks to machines. Although spammers constantly change their tactics to evade detection, spam can generally be detected by machine methods.
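
As a toy illustration of the kind of machine check this implies, a crude scorer might look for exactly those marks of spam; the signals and weights below are invented for illustration, not a production filter.

```python
# Invented signals and weights, for illustration only: spam tends to be
# indiscriminate (broadcast widely), unrelated to the conversation, and
# overtly commercial -- and each of those traits is machine-checkable.
import re

COMMERCIAL_PATTERNS = [r"buy now", r"limited[- ]time offer", r"click here",
                       r"https?://\S+", r"\$\d+"]

def spam_score(message, recipient_count, topic_keywords):
    text = message.lower()
    score = 0.0
    # Overtly commercial appeals stand out.
    score += sum(1.0 for p in COMMERCIAL_PATTERNS if re.search(p, text))
    # Broadcast to a large audience.
    if recipient_count > 50:
        score += 2.0
    # No relation to the conversation at hand.
    if not any(k.lower() in text for k in topic_keywords):
        score += 1.5
    return score

print(spam_score("Click here for a limited-time offer! http://spam.example",
                 recipient_count=500, topic_keywords=["gardening", "tomatoes"]))
```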

Trollish behavior, however, is another matter altogether. Trolls may not have financial motives—more likely, they crave attention and want to disrupt the larger conversation. (See egocentric incentives.) Trolls quickly realize that the best way to accomplish these goals is by non-obvious means. An extremely effective means of trolling, in fact, is to disguise your trollish intentions as real conversation.

Accomplished trolls can be so subtle that even human agents would be hard-pressed to detect them. In Chapter 7, we discuss a kind of subtle trolling in a sports context: masquerading as a fan of the opposing team. For these trolls, presenting themselves as faithful fans is part of the fun—then it's all the more disruptive once they start to trash-talk "the home team."

How do you detect that?

It's hard for a human, and nigh impossible for a machine. It is, however, easier to do with a number of humans. By adding consensus- and reputation-enabled methods, it becomes easier to reliably discern trollish behavior from sincere contributions to the community. Because reputation systems, to some degree, reflect the tastes of the community, they also have a better-than-average chance of catching behavior that transgresses against those tastes.
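
As a sketch of what consensus plus reputation can look like, the fragment below weights each report by the reporter's standing and acts only when agreement among trusted members crosses a threshold; the thresholds and names are illustrative assumptions.

```python
# Illustrative only: weight each "this is a troll" report by the reporter's
# reputation, and act when the weighted consensus crosses a threshold.

def weighted_troll_score(reports, reputation):
    """reports: iterable of reporter ids; reputation: id -> weight in 0.0..1.0"""
    return sum(reputation.get(r, 0.1) for r in set(reports))

def should_review(reports, reputation, threshold=1.5):
    # One low-reputation report does little; agreement among several
    # trusted community members crosses the threshold.
    return weighted_troll_score(reports, reputation) >= threshold

reputation = {"longtime_fan": 0.9, "regular": 0.6, "new_account": 0.1}
print(should_review(["new_account"], reputation))                             # False
print(should_review(["longtime_fan", "regular", "new_account"], reputation))  # True
```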

September 23, 2009

Party Crashers (or 'Who invited these clowns?')

Reputation Wednesday is an ongoing series of essays about reputation-related matters. This week, we look at some of the possible effects when unanticipated guests enter your carefully planned and modeled system. This essay is excerpted from Chapter 5.

Reputation can be a successful motivation for users to contribute large volumes of content and/or high-quality content to your application. At the very least, reputation can provide critical money-saving value to your customer care department by allowing users to prioritize the bad content for attention and likewise flag power users and content to be featured.

But mechanical reputation systems, of necessity, are always subject to unwanted or unanticipated manipulation: they are only algorithms, after all. They cannot account for the many, sometimes conflicting, motivations for users' behavior on a site. One of the strongest motivations of users who invade reputation systems is commercial. Spam invaded email. Marketing firms invade movie review and social media sites. And drop-shippers are omnipresent on eBay.

eBay drop-shippers put the middleman back into the online market: they are people who resell items that they don't even own. It works roughly like this:

  1. A seller develops a good reputation, gaining a seller feedback karma of at least 25 for selling items that she personally owns.
  2. The seller buys some drop-shipping software, which helps locate items for sale on eBay and elsewhere cheaply, or joins an online drop-shipping service that has the software and presents the items in a web interface.
  3. The seller finds cheap items to sell and lists them on eBay for a higher price than they're available for in stores but lower than other eBay sellers are selling them for. The seller includes an average or above-average shipping and handling charge.
  4. The seller sells an item to a buyer, receives payment, and sends an order for the item, along with a drop-shipping payment, to the drop-shipper, who then delivers the item to the buyer.

This model of doing business was not anticipated by the eBay seller feedback karma model, which only includes buyers and sellers as reputation entities. Drop-shippers are a third party in what was assumed to be a two-party transaction, and they cause the reputation model to break in various ways:

  • The drop-shippers sometimes fail to deliver the goods as promised to the buyer. The buyer then gets mad and leaves negative feedback: the dreaded red star. That would be fine, but it is the seller (who never saw or handled the goods) that receives the mark of shame, not the actual shipping party.
  • This arrangement is a big problem for the seller, who cannot afford the negative feedback if she plans to continue selling on eBay.
  • The typical options for rectifying a bungled transaction won't work in a drop-shipper transaction: it is useless for the buyer to return the defective goods to the seller. (They never originated from the seller anyway.) Trying to unwind the shipment (the buyer returns the item to the seller; the seller returns it to the drop-shipper, if that is even possible; the drop-shipper buys or waits for a replacement item and finally ships it) would take too long for the buyer, who expects immediate recompense.

In effect, the seller can't make the order right with the customer without refunding the purchase price in a timely manner. This puts them out of pocket for the price of the goods, along with the hassle of trying to recover the money from the drop-shipper.

But a simple refund alone sometimes isn't enough for the buyer! No, depending on the amount of perceived hassle and effort this transaction has cost them, they are still likely to rate the transaction negatively overall. (And rightfully so – once it's become evident that a seller is working through a drop-shipper, many of their excuses and delays start to ring very hollow.) So a seller may have, at this point, outlayed a lot of their own time and money to rectify a bad transaction only to still suffer the penalties of a red star.

What option does the seller have left to maintain their positive reputation? You guessed it – a payoff. Not only will a concerned seller eat the price of the goods – and any shipping involved – but they will also pay an additional cash bounty (typically up to $20.00) to get buyers to flip a red star to green.

What is the cost of clearing negative feedback on drop-shipped goods? The cost of the item + $20.00 + lost time in negotiating with the buyer. That's the cost that reputation imposes on drop-shipping on eBay.

The lesson here is that a reputation model will be reinterpreted by users as they find new ways to use your site. Site operators need to keep a wary eye on the specific behavior patterns they see emerging and adapt accordingly. Chapter 10 provides more detail and specific recommendations for prospective reputation modelers.