This chapter is a real-life case study applying many of the theories and much of the practical advice presented in this book. The lessons learned on this project had a significant impact on our thinking about reputation systems, the power of social media moderation, and the need to publish these results in order to share our findings with the greater web application development community.
In summer 2007, Yahoo! tried to address some moderation challenges with one of its flagship community products: Yahoo! Answers (answers.yahoo.com). The service had fallen victim to its own success and drawn the attention of trolls and spammers in a big way. The Yahoo! Answers team was struggling to keep up with harmful, abusive content that flooded the service, most of which originated with a small number of bad actors on the site.
Ultimately, the answer to these woes was provided by a clever (but simple) system that was rich in reputation: it was designed to identify bad actors, indemnify honest contributors, and take the overwhelming load off of the customer care team. Here's how that system came about.
Yahoo! Answers debuted in December of 2005 and almost immediately enjoyed massive popularity as a community-driven web site and a source of shared knowledge.
Yahoo! Answers provides a very simple interface to do, chiefly, two things: pose questions to a large community (potentially, any active, registered Yahoo! user-that's roughly a half-billion people worldwide); or answer questions that others have asked. Yahoo! Answers was modeled, in part, on similar question-and-answer sites like Korea's Naver.com Knowledge Search.
The appeal of this format was undeniable. By June of 2006, according to Business 2.0, Yahoo! Answers had already become “the second most popular Internet reference site after Wikipedia and had more than 90% of the domestic question-and-answer market share, as measured by comScore.” Its popularity continues and, owing partly to excellent search engine optimization, Yahoo! Answers pages frequently appear very near the top of search results pages on Google and Yahoo! for a wide variety of topics.
Yahoo! Answers is by far the most active community site on the Yahoo! network. It logs more than 1.2 million user contributions (questions and answers combined) each day.
Yahoo! Answers is a unique kind of marketplace-one not based on the transfer of goods for monetary reward. No, Yahoo! Answers is a knowledge marketplace, where the currency of exchange is ideas. Furthermore, Yahoo! Answers focuses on a specific kind of knowledge.
Micah Alpern was the user experience lead for early releases of Yahoo! Answers. He refers to the unique focus of Yahoo! Answers as “experiential knowledge” -the exchange of opinions and sharing of common experiences and advice. (See Figure_10-1 ) While verifiable, factual information is indeed exchanged on Yahoo! Answers, a lot of the conversations that take place there are intended to be social in nature.
So, what problems, exactly, was Yahoo! Answers suffering from? Two factors-the timeliness with which Yahoo! Answers displayed new content, and the overwhelming number of contributions it received-had combined to create an unfortunate environment that was almost irresistible to trolls. Dealing with offensive and antagonistic user content had become the number one feature request from the Yahoo! Answers community.
The Yahoo! Answers team first attempted a machine-learning approach, developing a black-box abuse classifier (lovingly named the “Junk Detector” ) to prefilter abuse reports coming in. It was intended to classify the worst of the worst content and put it into a prioritized queue for the attention of customer care agents.
The Junk Detector was mostly a bust. It was moderately successful at detecting obvious spam, but it failed altogether to identify the subtler, more insidious contributions of trolls.
What's the difference between trolling behavior and plain old spam? The distinction is subtle, but understanding it is critical when you're combating either one. We classify as spam communications that are unwanted, make overtly commercial appeals, and are broadcast to a large audience.
Fortunately, the same characteristics that mark a communication as spam also make it stand out. You probably can easily identify spam after just a quick inspection. We can teach these same tricks to machines. Although spammers constantly change their tactics to evade detection, spam generally can be detected by machine methods.
Trollish behavior, however, is another matter altogether. Trolls may not have financial motives-more likely, they crave attention and are motivated by a desire to disrupt the larger conversation in a community. Trolls quickly realize that the best way to accomplish these goals is by nonobvious means. An extremely effective means of trolling, in fact, is to disguise your trollish intentions as real conversation.
Accomplished trolls can be so subtle that even human agents are hard pressed to detect them. In Chap_6-Applying_Scope_to_UK_Msg_Boards we discuss a kind of subtle trolling in a sports context: a troll masquerading as a fan of the opposing team. For these trolls, pretending to be faithful fans is part of the fun-it renders them all the more disruptive when they start to trash-talk the home team.
How do you detect that? It's hard for a human-and nearly impossible for a machine-but it's possible with a number of humans. Adding consensus and reputation-enabled methods makes it easier to reliably discern trollish behavior from sincere contributions. Because a reputation system to some degree reflects the tastes of a community, it also has a better-than-average chance of catching behavior that transgresses those tastes.
Engineering manager Ori Zaltzman recalls the exact moment he knew for certain that something had to be done about trolls-when he logged onto Yahoo! Answers to see the following question highlighted on the home page: “What is the best sauce to eat with my fried dead baby?” (And, yes, we apologize for the citation-but it certainly illustrates the distasteful effects of letting trolls go unchallenged in your community.)
That question got through the Junk Detector easily. Even though it's an obviously unwelcome contribution, on the surface-and to a machine-it looked like a perfectly legitimate question: grammatically well formed, no SHOUTING ALL CAPS. So abusive content could sit on the site with impunity for hours before the staff could respond to abuse reports.
Because the currency of Yahoo! Answers is the free exchange of opinions, a critical component of “free” in this context is timely. Yahoo! Answers functions best as a near-real-time communication system, and-as a design principle-erred on the side of timely delivery of users' questions and answers. User contributions are not subject to any type of editorial approval before being pushed to the site.
One particular area of the site became a highly sought-after target for abusers: the high-profile front page of Yahoo! Answers. (See Figure_10-2 .)
Any newly asked question could potentially appear in highly trafficked areas, including the following:
* The index of open (answerable) questions (http://answers.yahoo.com/dir/index)
Yahoo! Answers, somewhat famously, already featured a reputation system-a very visible one, designed to encourage and reward ever-greater levels of user participation. On Yahoo! Answers, user activity is rewarded with a detailed point system. (See Chap_7-Points_and_Accumulators .)
At the heart of the debate is this question: does the existence of these points-and the incentive of rewarding people for participation-actually improve the experience of using Yahoo! Answers? Does it make the site a better source of information? Or are the system's game-like elements promoted too heavily, turning what could be a valuable, informative site into a game for the easily distracted?
We're mostly steering clear of that discussion here. (We touch on aspects of it in Chapter_7 .) This case study deals only with combating obviously abusive content, not with judging good content from bad.
The crew fielded to tackle this problem was a combination of two teams.
The Yahoo! Answers product team had ultimate responsibility for the application. It was made up of domain experts on questions and answers: from the rationale behind the service, to the smallest details of user experience, to building the high-volume scalable systems that supported it. These were the folks who best understood the service, and they were held accountable for preserving the integrity of the user experience. Ori Zaltzman was the engineering manager, Quy Le was product manager, Anirudh Koul was the engineer leading the troll hunt and optimizing the model, and Micah Alpern was the lead user experience designer.
The members of the product team were the primary customers for the technology and advice of another team at Yahoo!, the reputation platform team. The reputation platform was a tier of technology (detailed in Appendix_A ) that was the basis for many of the concepts and models we're discussing in this book (this book is largely documentation of that experience). Yvonne French was the product manager for the reputation platform; Building Web 2.0 Reputation Systems coauthor Randy Farmer was the platform's primary designer and advised on reputation model and system deployment. A small engineering team built the platform and implemented the reputation models.
As you'll recall from Chapter_5 , we recommend starting any reputation system project by asking these fundamental questions:
As is often the case on community-driven web sites, what is good for the community-good content and the freedom to have meaningful, interruption-free exchanges-also just happens to make for good business value for the site owners. This project was no different, but it's worth discussing the project's specific goals.
The first motivation for cleaning up abuse on Yahoo! Answers was cost. The existing system for dealing with abuse was expensive, relying as it did on heavy human-operator intervention. Each and every report of abuse had to be verified by a human operator before action could be taken on it.
Randy Farmer, at the time the community strategy analyst for Yahoo!, pointed out the financial foolhardiness of continuing down the path where the system was leading: “the cost of generating abuse is zero, while we're spending a million dollars a year on customer care to combat it-and it isn't even working.” Any new system would have to fight abuse at a cost that was orders of magnitude lower than that of the manual-intervention approach.
The monetary cost of dealing with abuse on Yahoo! Answers was considerable, but the community cost of not dealing with it would have been far higher: bad behavior begets bad behavior, and leaving obviously abusive content in high-profile locations on the site would, over time, absolutely erode the perceived value of social interactions on Yahoo! Answers. (See Chap_8-Broken_Windows_Theory .)
Of course, Yahoo! hoped that the inverse would also prove true: that if Yahoo! Answers addressed the problem forcefully and with great vigor, the community would notice the effort and respond in kind. (See Chap_9-The_Hawthorne_Effect .)
The goals for content quality were twofold: reduce the overall amount of abusive content on the site, and reduce the amount of time it took for content reported as abusive to be pulled down.
In Chapter_5 we proposed a number of content control patterns as useful models for thinking about the ways in which your content is created, disseminated, and moderated. Let's revisit those patterns briefly for this project.
Before the community content moderation project, Yahoo! Answers fit nicely in the basic social media pattern. (See Chap_5-Basic-Social-Media-CCP .) While users were given responsibility for creating and editing (voting for, or reporting as abusive) questions and answers, final determination for removing content was left up to the staff.
The team's goal was to move Yahoo! Answers closer to The Full Monty (Chap_5-The-Full-Monty-CCP ) and put the responsibility for removing or hiding content right into the hands of the community. That responsibility would be mediated by the reputation system, but staff intervention in content quality issues would only be necessary in cases where content contributors appealed the system's decisions.
In Chap_5-Incentives , we discussed some ways to think about the incentives that could drive community participation on your site. For Yahoo! Answers, the team decided to devise incentives that took into account a couple of primary motivations:
* Some community members would report abuse for altruistic reasons: out of a desire to keep the community clean. (See Chap_5-Altruistic_Incentives .) Downplaying the contributions of such users would be critical: the more public their deeds became, the less likely they would be to continue acting out of sheer altruism.
The team devised this plan for the new model: a reputation model would sit between the two existing systems-a report mechanism that permitted any user on Yahoo! Answers to flag any other user's contribution, and the (human) customer care system that acted on those reports. (See Figure_10-3 .)
This approach was based on two insights:
1. Customer care could be removed from the loop for the vast majority of content-removal decisions.
2. The exceptions-content removed by mistake or malice-could be dealt with after the fact.
The team would accomplish item 1, removing customer care from the loop, by implementing a new way to remove content from the site-“hiding.” Hiding involved trusting the community members themselves to vote to hide the abusive content. The reputation platform would manage the details of the voting mechanism and any related karma. Because this design required no external authority to remove abusive content from view, it was probably the fastest possible way to cut display time for abusive content.
As for item 2, dealing with exceptions, the team devised an ingenious mechanism-an appeals process. In the new system, when the community voted to hide a user's content, the system sent the author an email explaining why, with an invitation to appeal the decision. Customer care would get involved only if the user appealed. The team predicted that this process would limit abuse of the ability to hide content; it would provide an opportunity to inform users about how to use the feature; and, because trolls often don't give valid email addresses when registering an account, they would simply be unable to appeal-they'd never receive the notices.
Most of the rest of this chapter details the reputation model designated by just the Hide Content? diamond in Figure_10-3 . See the patent application for more details about the other (nonreputation) portions of the diagram, such as the Notify Author and Appeals process boxes.
Yahoo! has applied for a patent on this reputation model, and that application has been published: Trust Based Moderation-Inventors: Ori Zaltzman and Quy Dinh Le. Please consider the patent if you are even thinking about copying this design.
The authors are grateful to both the Yahoo! Answers and the reputation product teams for sharing their design insights and their continued assistance in preparing this case study.
Yahoo! Answers was already a well-established service at the time that the community content moderation model was being designed, with all of the objects and most of the available inputs already well defined. The final model includes dozens of inputs to more than a dozen processes. Out of respect for intellectual property and the need for brevity, we have not detailed every object and input here. But, thanks to the Yahoo! Answers team's willingness to share, we're able to provide an accurate overall picture of the reputation system and its application.
Here are the objects of interest for designing a community-powered content moderation system.
Users may also mark a question with a star, indicating that the question is a favorite. Each of these rating schemes already existed at the time the community content moderation system was designed, so for each scheme, the inputs and the outputs were both available for the designers' consideration.
Developing this model required considering at least two different classifications of users: authors and reporters.
Customer care agents also have a reputation-for accuracy-though it isn't calculated by this model. At the start of the Yahoo! Answers community content moderation project, the accuracy of a customer care agent's evaluation of questions was about 90%. That rate meant that 1 in 10 submissions was either incorrectly deleted or incorrectly allowed to remain on the site. An important measure of the model's effectiveness was whether users' evaluations were more accurate than the staff's.

The design included two documents that are worthy of note, though they were not formal objects (that is, they neither provided input nor were reputable entities). The Yahoo! terms of service and the Yahoo! Answers community guidelines (Figure_10-4 ) are the written standards for questions and answers. Users are supposed to apply these rules in evaluating content.
When a reputation model is introduced, users often are confused at first about what the reputation score means. The design of the community content moderation model for Yahoo! Answers is only intended to identify abusive content, not abusive users. Remember that many reasons exist for removing content, and some content items are removed as a result of behaviors that authors are willing to change, if gently instructed to do so.
The inclusion of an appeals process in the application not only provides a way to catch false-positive classification by reporters; it also gives Yahoo! a chance to inform authors of the requirements for participating in Yahoo! Answers - allowing for the user to learn more about expected behavior.
Ideally, in designing a reputation system, you'd start with as comprehensive a list of potential inputs as possible. In practice, when the Yahoo! Answers team was designing the community content moderation model, they used a more incremental approach. As the model evolved, the designers added more subtle objects and inputs. Below, to illustrate an actual model development process, we'll roughly follow the historical path of the Yahoo! Answers design.
When you develop a reputation model, it's good practice to start simple: focus only on the main objects, inputs, decisions, and uses. Assume a universe in which the model works exactly as intended. Don't focus too much on performance or abuse at first-you'll get to those issues in later iterations. Trying to solve this kind of complex equation in all dimensions simultaneously will just lead to confusion and impede your progress.
For the Yahoo! Answers community content moderation system, the designers started with a very basic model-abuse reports would accumulate against a content item, and when some threshold was reached, the item would be hidden. This model, sometimes called “X-strikes-and-you're-out,” is quite common in social web applications. Craigslist is a well-known example.
Despite the apparent complexity of the final application, the model's simple core design remained unchanged: accumulated abuse reports automatically hide content. Keeping that core design in mind as the key goal helped eliminate complications in the design.
Inputs
From the beginning, the team planned for the primary input to the model to be a user-generated abuse report explicitly about a content item (a question or an answer). This user interface device was the same one already in place for alerting customer care to abuse. Though many other inputs were possible, initially the team considered a model with abuse reports as the only input.
The abuse report was the only input in the first iteration of the model.
Mechanism and Diagram
At the core of the model was a simple, binary decision: should a content item that has just been reported as abusive be hidden? How does the model make the decision, and, if the result is positive, how should the application be notified?
In the first iteration, the model for this decision was “three strikes and you're out.” (See Figure_10-5 .) Abuse reports fed into a simple accumulator (Chap_3-Simple_Accumulator ). Each report about a content item was given equal weight; all reports were added together and stored as ContentItemAbuse. That score was sent on to a simple evaluator, which tested it against a threshold (3) and either terminated the process (if the threshold had not been reached) or alerted the application to hide the item.
Given that performance was a key requirement for this model, the abuse reports were delivered asynchronously, and the outgoing alert to the application used an application-level messaging system.
This iteration of the model did not include karma.
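To make the flow concrete, here is a minimal sketch of this first iteration in Python. It is not Yahoo!'s code: the class and callback names are invented for illustration, and only the three-strikes threshold comes from the description above.

```python
# Minimal sketch of iteration 1: a simple accumulator plus a threshold
# evaluator. Names and structure are illustrative only.
from collections import defaultdict

STRIKE_THRESHOLD = 3  # "three strikes and you're out"

class SimpleAbuseModel:
    def __init__(self, hide_item):
        # hide_item is a callback into the application (e.g., an async message)
        self.hide_item = hide_item
        self.content_item_abuse = defaultdict(int)  # ContentItemAbuse per item

    def on_abuse_report(self, item_id, reporter_id):
        # Every report carries equal weight in this iteration.
        self.content_item_abuse[item_id] += 1
        # Simple evaluator: alert the application once the threshold is reached.
        if self.content_item_abuse[item_id] >= STRIKE_THRESHOLD:
            self.hide_item(item_id)

# Usage: the item is hidden after the third report against it.
hidden = []
model = SimpleAbuseModel(hide_item=hidden.append)
for reporter in ("u1", "u2", "u3"):
    model.on_abuse_report("question:42", reporter)
assert hidden == ["question:42"]
```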
Analysis
This very simple model didn't really meet the minimum requirement for the application-the fastest possible removal of abusive content. Three strikes is often too many, but one or two is sometimes too few, giving too much power to bad actors.
The model's main weakness was that it gave every abuse report equal weight. By giving trusted users more power to hide content and giving unknown users or bad actors less power, the model could improve the speed and accuracy with which abusive content was removed.
The next iteration of the model introduced karma for reporters of abuse.
Ideally, the more abuse a user reports accurately, the greater the trust the system should place in that user's reports. In the second iteration of the model, shown in Figure_10-6 , when a trusted reporter flagged an item, it was hidden immediately. Trusted reporters had proven, over time, that their motivations were pure, their comprehension of community standards was good, and their word could be taken at face value.
Reports by users who had never previously reported an item, with unknown reputation, were all given equal weight, but it was significantly lower than reports by users with a positive history. In this model, individual unknown reporters had less influence on any one content item, but the votes of different individuals could accrue quickly. (At the same time, the individuals accrued their own reporting histories, so unknown reporters didn't stay unknown for long.)
Though you might think that “bad” reporters (those whose reports were later overturned on appeal) should have less say than unknown users, the model gave equal weight to reports from bad reporters and unknown reporters. See Chap_6-Negative_Public_Karma .
Inputs
To the inputs from the previous iteration, the designers added three events related to flagging questions and answers accurately:
* Item Hidden: sent to the reputation model when accumulated abuse reports caused a content item to be hidden.
* Appeal Result: Upheld: sent to the reputation model when customer care reviewed an appeal and confirmed that the hidden item violated the rules.
* Appeal Result: Overturned: sent to the reputation model for corrective adjustments when customer care reviewed an appeal and restored the hidden item.

Mechanism and Diagram
The designers transformed the overly simple “strikes” -based model to account for a user's abuse report history.
Goals: Decrease the time required to hide abusive content. Reduce the risk of inexperienced or bad actors hiding content inappropriately.
Solution: Add AbuseReporter karma to record the user's accuracy in hiding abusive content. Use AbuseReporter to give greater weight to reports by users with a history of accurate abuse reporting.
To accommodate the varying weight of abuse reports, the designers changed the calculation of ContentItemAbuse from strikes to a normalized value, where 0.0 represented no abuse information known and 1.0 represented the maximum abuse value. The evaluator now compared the ContentItemAbuse score to a normalized value representing the certainty required before hiding an item.
The designers added a new process to the model, “update report karma,” which maintained the AbuseReporter reputation claim, a normalized value, where 0.0 represented a user with no history of abuse reporting and 1.0 represented a user with a completely accurate abuse reporting history. A user with a perfect score of 1.0 could hide any item immediately.
The inputs that increased AbuseReporter were Item Hidden and Appeal Result: Upheld. The input Appeal Result: Overturned had a disproportionately large negative effect on AbuseReporter, providing an incentive for reporters not to use their power indiscriminately.
Unlike the first process, the new “update item abuse” process did not treat each input the same way. It read the reporter's AbuseReporter karma, added it (plus a small constant, so that users with no karma made at least a small contribution to the result) to ContentItemAbuse, and capped the result at the maximum. If the result was 1.0, the system hid the item, and in addition to alerting the application, it also sent an “item hidden” message for each user who had flagged the item. This message represented community consensus and, since the vast majority of hidden items would never be reviewed by customer care, was often the only opportunity the system had to reinforce the karma of those users. Very few appeals were anticipated, given that trolls were known to give bogus email addresses when registering (and therefore could never appeal). The incentives for both legitimate authors and good abuse reporters discouraged abuse of the community moderation model.
The system sent “appeal results” messages asynchronously as part of the customer care application; the messages could come in at any time. After AbuseReporter was adjusted, the system did not attempt to update other ContentItemAbuse scores the reporter may have contributed to.
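A sketch of how this weighted version might look follows. The constants here (the small contribution for unknown reporters, the sizes of the karma reward and penalty) are placeholders we invented; the production values were tuned by the team and are not public.

```python
# Sketch of iteration 2: abuse reports weighted by AbuseReporter karma.
# SMALL_CONSTANT and the reward/penalty sizes are invented placeholders.
from collections import defaultdict

SMALL_CONSTANT = 0.1   # even unknown reporters contribute a little
HIDE_THRESHOLD = 1.0   # normalized certainty required to hide an item

class WeightedAbuseModel:
    def __init__(self, hide_item):
        self.hide_item = hide_item
        self.content_item_abuse = defaultdict(float)   # 0.0 .. 1.0 per item
        self.abuse_reporter = defaultdict(float)       # 0.0 .. 1.0 per user
        self.reporters = defaultdict(set)              # who flagged each item

    def on_abuse_report(self, item_id, reporter_id):
        self.reporters[item_id].add(reporter_id)
        weight = self.abuse_reporter[reporter_id] + SMALL_CONSTANT
        score = min(1.0, self.content_item_abuse[item_id] + weight)
        self.content_item_abuse[item_id] = score
        if score >= HIDE_THRESHOLD:
            self.hide_item(item_id)
            # "Item hidden" reinforces the karma of everyone who flagged it.
            for user in self.reporters[item_id]:
                self.on_item_hidden(user)

    def on_item_hidden(self, reporter_id):
        self.abuse_reporter[reporter_id] = min(
            1.0, self.abuse_reporter[reporter_id] + 0.2)

    def on_appeal_overturned(self, reporter_id):
        # Overturned appeals carry a disproportionately large penalty.
        self.abuse_reporter[reporter_id] = max(
            0.0, self.abuse_reporter[reporter_id] - 0.5)
```

Note how a reporter with karma at or near 1.0 can hide an item with a single report, while unknown reporters must accumulate several independent reports.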
Analysis
The second iteration of the model did exactly what it was supposed to do: it allowed trusted reporters to hide abusive content immediately. However, it ignored the value of contributions by authors who might themselves be established, trusted members of the community. As a result, a single mistaken abuse report against a top contributor led to a higher appeal rate, which not only increased costs but generated bad feelings about the site. Furthermore, even before the first iteration of the model had been implemented, trolls already had been using the abuse reporting mechanism to harass top contributors. So in the second iteration, treating all authors equally allowed malicious users (trolls or even just rivals of top contributors) to take down the content of top contributors with just a few puppet accounts.
The designers found that the model needed to account for the understanding that in cases of alleged abuse, some authors always deserve a second opinion. In addition, the designers knew that to hide content posted by casual regular users, the ContentItemAbuse score required by the model should be lower-and for content by unknown authors, lower still.
In other words, the model needed karma for author contributions.
The third iteration of the model introduced QuestionAuthor karma and AnswerAuthor karma, which reflected the quality and quantity of author contributions. The system compared ContentItemAbuse to those two reputations instead of a constant. This change raised the threshold for hiding content for active, trusted authors and lowered the threshold for unknown authors and authors known to have contributed abusive content.
Inputs
The new inputs to the model fell into two groups: inputs that indicated the quantity and community reputation of the questions and answers contributed by an author, and evidence of any previous abusive contributions.
Several positive inputs increased the QuestionQuality reputation score. When customer care staff deleted a question, the system reset its QuestionQuality reputation score to 0.0 and adjusted the author's karma appropriately.

Another negative input was the Junk Detector score, which acted as an initial guess about the level of abusive content in the question. Note that a high Junk Detector score would have prevented the question from ever being displayed at all.
The model also tracked the number of questions the author had asked (QuestionsAskedCount). This configuration allowed new contributors to start with a reputation score based on the average quality of all previous contributions to the site, by all authors (AuthorAverageQuestionQuality).
When other users answered the question, the question itself inherited the AverageAnswererQuality reputation score for all users who answered it. (If a lot of good people answer your question, it must be a good question.)
Several positive inputs increased the AnswerQuality reputation score, while negative ratings decreased it-but only down to a point: the system stopped decreasing the AnswerQuality reputation score of answers that fell below the application's display threshold. This choked off further negative ratings simply because the item was no longer displayed to most users.
When customer care staff deleted an answer, the system reset the AnswerQuality reputation to 0.0 and adjusted the author's karma appropriately.
Another negative input was the Junk Detector rating, which acted as a rough guess at the level of abusive content in the answer. Note that if the Junk Detector rating was high, the system would already have hidden the answer before even sending it through the reputation process.
The model also tracked the number of questions the author had answered (QuestionsAnsweredCount). In that configuration, each time an author posted a new answer, the system assigned a starting reputation based on the average quality of all answers previously submitted by that author (AuthorAverageAnswerQuality).

The second group of inputs was evidence of previous abusive contributions (captured as AbusiveContent karma). All previously hidden questions or answers had a negative effect on all contributor karmas.

Mechanism and Diagram
In the third iteration of the model, the designers created several new reputation scores for questions and answers and a new user role with a karma-that of author of the flagged content. Those additions more than doubled the complexity compared to the previous iteration, as illustrated in Figure_10-7 . But if you consider each iteration as a separate reputation model (which is logical because each addition stands alone), each one is simple. Integrating these separable small models produced a full-blown reputation system. For example, the karmas introduced by the new models-QuestionAuthor karma, AnswerAuthor karma, and AbusiveContent karma-could find uses in contexts other than hiding abusive content.
In this iteration the designers added two new main karma tracks, represented by the parallel messaging tracks for question karma and answer karma. The calculations are so similar that we'll present the description only once, using “item” to represent either answer or question.
The system gave each item a quality reputation [QuestionQuality | AnswerQuality], which started as the average of the quality reputations of the previously contributed items [AuthorAverageQuestionQuality | AuthorAverageAnswerQuality] and a bit of the Junk Detector score. As either positive (stars, ratings, shares) or negative inputs (items hidden by customer care staff) changed the scores, the averages and karmas in turn were immediately affected. Each positive input was restricted by weights and limits; for example, only the first 10 users marking an item as a favorite were considered, and each could contribute a maximum of 0.5 to the final quality score. This meant that increasing the item quality reputation required many different types of positive inputs.
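One plausible reading of this "weights and limits" rule is sketched below. The per-favorite weight is invented, and we've interpreted the 0.5 figure as a cap on the contribution of the favorites channel as a whole; the real constants and combination formula were tuned internally.

```python
# Sketch of the "weights and limits" pattern on positive quality inputs.
# Constants are hypothetical: only the first 10 favorites count, and the
# favorites channel is capped so it cannot dominate the quality score.
MAX_FAVORITES_COUNTED = 10
FAVORITES_CHANNEL_CAP = 0.5    # maximum total contribution from favorites
PER_FAVORITE_WEIGHT = 0.05     # invented per-event weight

def favorites_contribution(num_favorites: int) -> float:
    counted = min(num_favorites, MAX_FAVORITES_COUNTED)
    return min(counted * PER_FAVORITE_WEIGHT, FAVORITES_CHANNEL_CAP)

def item_quality(favorites: int, other_positive: float, negative: float) -> float:
    # Combine capped channels and clamp to the normalized 0.0 .. 1.0 range,
    # so raising quality requires many different types of positive inputs.
    raw = favorites_contribution(favorites) + other_positive - negative
    return max(0.0, min(1.0, raw))
```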
Once the system had assigned a new quality score to an item and had calculated and stored the overall average quality score, it sent a message with the average score to the process that calculated the individual's quality karma [QuestionAuthor | AnswerAuthor], which subtracted the user's overall AbusiveContent karma to generate the final result.
The system then combined the QuestionAuthor and AnswerAuthor karmas into ContentAuthor karma, using the best (the larger) of the two values. That approach reflected the insight of Yahoo! Answers staff that people who ask good questions are not the same as people who give good answers.
The designers once again changed the “Hide Content?” process, now comparing ContentItemAbuse to the new ContentAuthor karma to determine whether the content should be hidden. When an item was hidden, that information was sent as an input into a new process that updated the AbusiveContent karma.
The new process for updating AbusiveContent karma also incorporated the inputs from customer care staff that were included in iteration 2-appeal results and content removals-which affected the karma either positively or negatively, as appropriate. Whenever an input entered that process, the system sent a message with the updated score to each of the processes for updating question and answer karma.
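A sketch of the revised “Hide Content?” decision appears below. The exact formula comparing ContentItemAbuse with ContentAuthor karma is not public, so this version simply raises the required abuse score as author karma rises; the base threshold is an invented constant.

```python
# Sketch of the iteration-3 "Hide Content?" decision. The comparison rule
# and BASE_THRESHOLD are illustrative, not the production formula.
BASE_THRESHOLD = 0.3   # hypothetical floor so unknown authors get some protection

def content_author_karma(question_author: float, answer_author: float) -> float:
    # Use the best (larger) of the two values: people who ask good questions
    # are not necessarily the people who give good answers, and vice versa.
    return max(question_author, answer_author)

def should_hide(content_item_abuse: float,
                question_author: float,
                answer_author: float) -> bool:
    # Higher author karma -> more abuse-report weight needed to hide the item.
    threshold = min(1.0, BASE_THRESHOLD + content_author_karma(
        question_author, answer_author))
    return content_item_abuse >= threshold
```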
Analysis
By adding positive and negative karma scores for authors and effectively requiring a second or third opinion before hiding their content, the designers added protection for established, trusted authors. The change also shortened the amount of time that bad content from historically abusive users would appear on the site, by allowing even lightly experienced abuse reporters to hide it with a single strike. The team was very close to being finished.
But it still had a cold-start problem. How could the model protect authors who weren't abusive but didn't have a strong history of posting contributions or reporting abuse? They were still too vulnerable to flagging by other users-especially inexperienced or malicious reporters.
The team needed as much outside information as it could get its hands on to provide some protection to new users who deserved it and to expose malicious users from the start.
The team could have stopped here, but it wanted the system to be as effective as possible as soon as it was deployed. Even before abuse reporters can build up a history of accurately reporting abuse, the team wanted to give the best users a leg up over trolls and spammers, who almost always create accounts solely for the purpose of manipulating content for profit or malice.
In other words, the team wanted to magnify any reasons for trusting or being suspicious of a user from the very beginning, before the user started to develop a history with the reputation system.
To that end, the designers added a model of inferred karma (see Chap_6-Inferred_Karma ).
Fortunately, Yahoo! Answers had access to a wealth of data-inferred karma inputs-about users from other contexts.
Inputs
Many of the inferred inputs came from Yahoo! site security features. To maintain that security, some of the inputs have been omitted, and the descriptions of others have been altered to protect proprietary features.
Mechanism and Diagram
In the final iteration of the model, the designers implemented this simple idea: until the user had a detailed history in the reputation model, use a TrustBootstrap reputation as a reasonably trustworthy placeholder. As the number of a user's abuse reports increased, the share of TrustBootstrap used in calculating the user's reporter and author karmas was decreased. Over time, the user's bootstrap reputation faded in significance until it became computationally irrelevant.
The scores for AbusiveContent karma and AbuseReporter karma now took the various inferred karma inputs into account.
AbuseReporter karma was calculated by mixing what the system knew about a user's abuse-reporting history (ConfirmedReporter karma) with what could be inferred about the user's behavior from other inputs (TrustBootstrap).
TrustBootstrap was itself made up of three other reputations: SuspectedAbuser karma, which reflected any evidence of abusive behavior; CommunityInvestment karma, which represented the user's contributions to Yahoo! Answers and other communities; and AbusiveContent karma, which held an author's record of submitting abusive content.
There were risks in getting the constants wrong: too much power too early could lead to abuse, while depending on the bootstrap for too long could lead to distrust if reporters didn't see the effects of their growing reputation quickly enough.
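Before walking through the individual processes, here is a compact sketch of the mixing and fade-out logic. All weights and the fade-out schedule are invented placeholders; only the structure (a weighted mixer feeding a bootstrap whose share shrinks as real reports accumulate) comes from the description in this chapter.

```python
# Sketch of the final iteration's inferred-karma bootstrap. Weights and the
# fade-out schedule are hypothetical, not the production constants.
def trust_bootstrap(community_investment: float,
                    suspected_abuser: float,
                    abusive_content: float) -> float:
    # Weighted mixer: positive CommunityInvestment karma weighed against two
    # negative signals, the connection-based SuspectedAbuser karma (weaker)
    # and the user-history-based AbusiveContent karma (stronger).
    score = (1.0 * community_investment
             - 0.5 * suspected_abuser
             - 1.0 * abusive_content)
    return max(0.0, min(1.0, score))

def abuse_reporter_karma(confirmed_reporter: float,
                         bootstrap: float,
                         reports_made: int,
                         fade_after: int = 20) -> float:
    # The bootstrap's share shrinks as the user builds a real reporting
    # history; eventually AbuseReporter equals ConfirmedReporter karma.
    bootstrap_share = max(0.0, 1.0 - reports_made / fade_after)
    return (bootstrap_share * bootstrap
            + (1.0 - bootstrap_share) * confirmed_reporter)
```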
We detail each new process in Figure_10-8 below.
* A new process calculated SuspectedAbuser karma using the inferred security inputs and the history of previous values for the user. Then it sent the value in a message to the “generate abuse reporter bootstrap” process.
* Another new process generated CommunityInvestment karma by accounting for the longevity of the user's participation in Yahoo! Answers and the age of the user's Yahoo! account, along with a simple participation value calculation (the user's level) and an approximation of answer quality-the best answer percentage. Each time this value changed, the system sent the new value to the “generate abuse reporter bootstrap” process.
* Whenever the system updated AbusiveContent karma, it now sent an additional message to the “generate abuse reporter bootstrap” process.
* The “generate abuse reporter bootstrap” process produced the TrustBootstrap reputation, which represented the system's best guess at the reputation of users without a long history of transactions with the service. It was a weighted mixer process, taking positive input from CommunityInvestment karma and weighing that against two negative scores: the weaker score was the connection-based SuspectedAbuser karma, and the stronger score was the user-history-based AbusiveContent karma. Even though a high value for AbusiveContent karma implied a high level of certainty that a user would violate the rules, it made up only a share of the bootstrap and not all of it. The reason was that the context for that score was content quality, while the context of the bootstrap was reporter reliability: someone who is great at evaluating content might suck at creating it. Each time the bootstrap was updated, it was passed along to the final process in the model: “update abuse reporter karma.”
* As before, the system maintained ConfirmedReporter karma to reflect the accuracy of the user's abuse reports. The only difference was that the system now sent a message for each reporter to the “update abuse reporter karma” process, where the claim value was incorporated into the bootstrap reputation.
* The final process, “update abuse reporter karma,” calculated AbuseReporter karma, which was used to weight the value of a user's abuse reports. To determine the value, it combined TrustBootstrap inferred karma with a verified abuse report accuracy rate as represented by ConfirmedReporter karma. As a user reported more items, the share of TrustBootstrap in the calculation decreased. Eventually, AbuseReporter karma became equal to ConfirmedReporter karma. Once the calculations were complete, the reputation statement was updated and the model was terminated.

Analysis
With the final iteration, the designers had incorporated all the desired features, giving historically trusted users the power to hide spam and troll-generated content almost instantly while preventing abusive users from hiding content posted by legitimate users. This model was projected to reduce the load on customer care by at least 90%, and maybe even as much as 99%. There was little doubt that the worst content would be removed from the site significantly faster than the typical 12+ hour response time; how much faster was difficult to estimate.
In a system with over a dozen processes, more than 20 unproven formulas, and about 50 best-guess constant values, a lot could go wrong. But iteration provided a roadmap for implementation and testing. The team started with one model, developed test data and testing suites for it, made sure it worked as planned, and then built outward from there-one iteration at a time.
The Yahoo! Answers example provides clear answers to many of the questions raised in Chapter_7 , where we discussed the visible display of reputation.
All interested parties (content authors, abuse reporters, and other users) certainly could see the effects of the reputations generated by the system at work: content was hidden or reappeared; appeals and their results generated email notifications. But the designers made no attempt to roll up the reputations and display them back to the community. The reputations definitely were not public reputations.
In fact, even showing the reputations only to the interested parties as personal reputations likely would only have given actual malfeasants more information about how to assault the system. These reputations were best reserved for use as corporate reputations only.
The Yahoo! Answers system used the reputation information that it gathered for one purpose only: to make a decision-hide or show content. Some of the other purposes discussed in Chap_7-Modify_Sites_Output do not apply to this example. Yahoo! Answers already used other, application-specific methods for ordering and promoting content, and the community content moderation system was not intended to interfere with those aspects of the application.
This question has a simple answer, with a somewhat more complicated clarification. As we mentioned above in Chap_10-Limiting_Scope , the ultimate target for reputations in this system is content: questions and answers.
It just so happened that in targeting those objects, the model generated a number of proven and assumed reputations that pertained to people: the authors of the content in question, and the reporters who flagged it. But judging the character of the users of Yahoo! Answers was not the purpose of the moderation system, and the data on those users should never be extended in that way without careful deliberation and design.
In Chapter_8 we detailed three main uses for reputation (other than displaying scores directly to users). We only half-jokingly referred to them as the good, the bad, and the ugly. Since the Yahoo! Answers community content moderation model says nothing about the quality of the content itself-only about the users who generate and interact with it-it can't really rank content from best to worst. Those first two use categories-the good and the bad-don't apply to the moderation model.
The Yahoo! Answers system dealt exclusively with the last category-the ugly-by allowing users to rid the site of content that violated the terms of service or the community guidelines.
The primary result of this system was to hide content as rapidly as possible so that customer support staff could focus on the exceptions (borderline cases and bad calls). After all, at the start of the project, even customer care staff had an error rate as high as 10%.
This single use of the model, if effective, would save the company over $1 million in customer care costs per year. That savings alone made the investment profitable in the first few months after deployment, so any additional uses for the other reputations in the model would be an added bonus.
For example, when a user was confirmed as a content abuser, with a high value for AbusiveContent karma, Yahoo! Answers could share that information with the Yahoo! systems that maintained the trustworthiness of IP addresses and browser cookies, raising the SuspectedAbuser karma score for that user's IP address and browser. That exchange of data made it harder for a spammer or a troll to create a new account. Users who are technically sophisticated can circumvent such measures, but the measures have been very effective against users who aren't-and who make up the vast majority of Yahoo! users.
When customer care agents reviewed appeals, the system displayed ConfirmedReporter karma for each abuse reporter, which acted as a set of confidence values. An agent could see that several reports from low-karma users were less reliable than one or two reports from abuse reporters with higher karma scores. A large enough army of sock puppets, with no reputation to lose, could still get a nonabusive item hidden, even if only briefly.
The approach to rolling out a new reputation-enabled application detailed in Chapter_9 is derived from the one used to deploy all reputation systems at Yahoo!, including the community content moderation system. No matter how many times reputation models had been successfully integrated into applications, the product teams were always nervous about the possible effects of such sweeping changes on their communities, their product, and ultimately the bottom line. Given the size of the Yahoo! Answers community, and earlier interactions with community members, the team was even more cautious than most others at Yahoo!. Whereas we've previously warned about the danger of over-compressing the integration, testing, and tuning stages to meet a tight deadline, the product team didn't have that problem. Quite the reverse: the team spent more time in testing than was strictly required, which created some challenges in interpreting the reputation test results-challenges we cover in detail below.
The full model as shown in Figure_10-8 has dozens of possible inputs, and many different programmers managed the different sections of the application. The designers had to perform a comprehensive review of all of the pages to determine where the new “Report Abuse” buttons should appear. More important, the application had to account for a new internal database status-“hidden”-for every question and answer on every page that displayed content. Hiding an item had important side effects on the application: total counts had to be adjusted, granted points had to be revoked, and a policy had to be devised and followed for handling any answers (and associated points) attached to hidden questions.
Integrating the new model required entirely new flows on the site for reporting abuse and handling appeals. The appeals part of the model required that the application send email to users, functionality previously reserved for opt-in watch lists and marketing-related mailings; appeals mailings were neither. Last, the customer care management application would need to be altered.
Application integration was a very large task that would have to take place in parallel with the testing of the reputation model. Reputation inputs and outputs would need to be completed, or at least simulated, early on. Some project tasks didn't generate reputation input and therefore didn't conflict with testing-for example, functions in the new abuse reporting flows such as informing users about how the new system worked and screens confirming receipt of an abuse report.
Just as the design was iterative, so too were the implementation and testing. In Chap_9-Testing , we suggested building and testing a model in pieces. The Yahoo! Answers team did just that, using constant values for the missing processes and inputs. The most important thing to get working was the basic input flow: when a user clicked “Report Abuse,” that action was tested against a threshold (initially a constant), and when it was exceeded, the reputation system sent a message back to the application to hide the item, effectively removing it from the site.
Once the basic input flow had been stabilized, the engineers added other features and connected additional inputs.
The engineers bench-tested the model by inserting a logical test probe into the existing abuse reporting flow and using those reports to feed the reputation system, which they ran in parallel. The system wouldn't take any action that users would see just yet, but the model would be put through its paces as each change was made to the application.
But the iterative bench-testing approach had a weakness that the team didn't understand clearly until much later: The output of the reputation process-the hiding of content posted by other users-had a huge and critical influence on the effectiveness of the model. The rapid disappearance of content items changed the site completely, so real-time abuse reporting data from the current application turned out to be nearly useless for drawing conclusions about the behavior of the model.
In the existing application, several users would click on an abusive question in the first few minutes after it appeared on the home page. But once the reputation system was working, few, if any, users would ever even see the item before it was hidden. The shape of inputs to the system was radically altered by the system's very operation.
Still unaware that the source of abuse reports was inappropriate, the team inferred from early calculations that the reputation system would be significantly faster and at least as accurate as customer care staff had been to date. It became clear that the nature of the application precluded any significant tuning before release-so release required a significant leap of faith. The code was solid, the performance was good, and the web side of the application was finally ready-but the keys to the kingdom were about to be turned over to the users.
The model was turned on provisionally, but every single abuse report was still sent on to customer care staff to be reviewed, just in case.
I couldn't sleep the first few nights. I was so afraid that I would come in the next morning to find all of the questions and answers gone, hidden by rogue users! It was like giving the readers of The New York Times the power to delete news stories.
Ori watched the numbers closely and made numerous adjustments to the various weights in the model. Inputs were added, revised, even eliminated.
For example, the model registered the act of starring (marking an item as a favorite) as a positive indicator of content quality. Seems natural, no? It turned out that a high correlation existed between an item being starred by a user and that same item eventually being hidden. Digging further, Ori found that many reporters of hidden items also starred an item soon before or after reporting it as abuse! Reporters were using the favorites feature to track when an item that they reported was hidden-they were abusing the favorites feature. As a result, starring was removed from the model.
At this point, the folly of trying to evaluate the effectiveness of the model during the bench-testing phase became clear. The results were striking and obvious. Users were much more effective than customer care staff at identifying inappropriate content-not only were they faster, they were more accurate! Having customer care double-check every report was actually decreasing the accuracy rate: the agents were introducing error by inappropriately reversing user reports.
Users definitely were hiding the worst of the worst content. All the content that violated the terms of service was getting hidden (along with quite a bit of the backlog of older items). But not all the content that violated the community guidelines was getting reported. It seemed that users weren't reporting items that might be considered borderline violations or disputable. For example, answers with no content related to the question, such as chatty messages or jokes, were not being reported. No matter how Ori tweaked the model, that didn't change.
In hindsight, the situation was easy to understand. The reputation model penalized disputes (in the form of appeals): if a user hid an item but the decision was overturned on appeal, the user would lose more reputation than he'd gained by hiding the item. That was the correct design, but it had the side effect of nurturing risk avoidance in abuse reporters. Another lesson in the difference between the bad (low-quality content) and the ugly (content that violates the rules)-they each require different tools to mitigate.
The final phase of testing and tuning of the Yahoo! Answers community content moderation system was itself a partial deployment-all abuse reports were temporarily verified post-reputation by customer care agents. Full deployment consisted mostly of shutting off the customer care verification feed and completing the few missing pieces of the appeals system. This was all completed within a few weeks of the initial beta-test release.
While the beta-test results were positive, in full deployment the system exceeded all expectations.
Note: We've omitted the technical performance metrics here. Without meeting those, the system would never have left the testing phase.
| Metric | Baseline | Goal | Result | Improvement |
| --- | --- | --- | --- | --- |
| Average time before reported content is removed | 18 hours | 1 hour | 30 seconds | 120 times the goal; >2,000 times the baseline |
| Abuse report evaluation error rate | 10% | 10% | <0.1% (appeal result: overturned) | 100x the goal or baseline |
| Customer care costs | 100% ($1 million per year) | 10% of $1 million per year | <0.1% (<$1 million per year) | 10 times the goal; 100 times the baseline; saved >$990,000 per year |
That phenomenon was perhaps best illustrated by another unexpected result about a month after the full system was deployed: both the number of abuse reports and requests for appeal dropped drastically over a few weeks. At first the team wondered if something was broken-but it didn't appear so, since a recent quality audit of the service showed that overall quality was still on the rise. User abuse reports resulted in hiding hundreds of items each day, but the total appeals dropped to a single-digit number, usually just 1 or 2, per day. What had happened?
The trolls and many spammers left.
They simply gave up and moved on.
The broken windows theory (see Chap_8-Broken_Windows_Theory ) clearly applied in this context-trolls found that the questions and answers they placed on the service were removed by vigilant reporters faster than they could create the content. Just as graffiti artists in New York stopped vandalizing trains because no one saw their handiwork, the Yahoo! Answers trolls either reformed or moved on to some other social media neighborhood to find their jollies.
Another important characteristic of the design was that, except for a small amount of localized text, the model was not language dependent. The product team was able to deploy the moderation system to dozens of countries in only a few months, with similar results.
Reputation models fundamentally change the applications that they're integrated into. You might think of them as coevolving with the needs and community of your site. They may drive some users away. Often, that is exactly what you want.
This system required major adjustments to the Yahoo! Answers operational model, including the following.
There was little doubt that driving spammers and trolls from the site had a significantly positive effect on the community at large. Again, abuse reporters became very protective of their reputations so that they could instantly take down abusive content. But it took users some time to understand the new model and adapt their behavior. Below are a few best practices for facilitating the transformation from a company-moderated site to full user moderation.
Best Practices for Switching to Full User Moderation
In the case of Yahoo! Answers, content must obey two different sets of rules: the terms of service and the community guidelines. Clearly describing each category and teaching the community what is (and isn't) reportable is critical to getting users to succeed as reporters as well as content creators. (Figure_10-9 )
Abuse reporter reputation was not displayed. Reporters didn't even know their own reputation score. But active users knew the effects of having a good abuse reporter reputation-most content that they reported was hidden instantly. What they didn't understand was what specific actions would increase or decrease it. As shown in Figure_10-10 , the Yahoo! Answers site clearly explained that the site rewarded accuracy of reports, not volume. That was an important distinction because Yahoo! Answers points (and levels) were based mostly on participation karma-where doing more things gets you more karma. Active users understood that relationship. The new abuse reporter karma didn't work that way. In fact, reporting abuse was one of the few actions the user could take on the site that didn't generate Yahoo! Answers points.
We've arrived at the end of the Yahoo! Answers tale and the end of Designing Web Reputation Systems. With this case study and with this book we've tried to paint as complete and real-world a picture as possible of the process of designing, architecting, and implementing a reputation system.
We've covered the real and practical questions that you're likely to face as you add reputation-enhanced decision making to your own product. We've shown you a graphical grammar for representing entities and reputation processes in your own models. Our hope is that you now have a whole new way to think about reputation on the Web.
We encourage you to continue the conversation with us at the book's companion web site. Come join us at buildingreputation.com.