October 28, 2009

eBay's Merchant Feedback System

Reputation Wednesday is an ongoing series of essays about reputation-related matters. This week, we explore, in some depth, one of the Web's longest-running and highest-profile reputation systems. (We also test-drive our new Google Maps-powered zoomable diagrams. Wheee!)

eBay is home to the Internet's best-known and most-studied user reputation, or karma, system: seller feedback. Its reputation model, like most others that are several years old, is complex and continuously adapting to new business goals, changing regulations, improved understanding of customer needs, and the never-ending need to combat reputation manipulation and abuse.

Rather than detail the entire feedback karma model here, we'll focus on claims that are made by the buyer about the seller. An important note about eBay feedback is that buyer claims exist in a specific context: a market transaction, that is, a successful bid at auction for an item listed by a seller. This specificity leads to a generally higher-quality karma score for sellers than they would get if anyone could rate a seller without ever demonstrating that they'd done business with them; see the discussion of implicit reputation in Chapter 1.

The scrolling/zooming diagram below shows how buyers influence a seller's karma scores on eBay. Though the specifics are unique to eBay, the pattern is common to many karma systems. For an explanation of the graphical conventions used, see Chapter 2.

The reputation model in this figure was derived from the following eBay pages: http://pages.ebay.com/help/feedback/scores-reputation.html and http://pages.ebay.com/services/buyandsell/welcome.html, both current as of July 2009.

We have simplified the model for illustration. Specifically, we omit the processing for the requirement that only buyer feedback and Detailed Seller Ratings (DSR) provided over the previous 12 months are considered when calculating the positive feedback ratio, the DSR community averages, and, by extension, power seller status. Also, eBay reports user feedback counters for the last month and quarter, which we omit here for the sake of clarity. Abuse mitigation features, which eBay does not disclose publicly, are also excluded.

This diagram illustrates the seller feedback karma reputation model, which is built from typical model components. The inputs are two compound buyer claims, seller feedback and detailed seller ratings; the outputs are several roll-ups of the seller's karma: community feedback ratings (a counter), feedback level (a named level), positive feedback percentage (a ratio), and the power seller rating (a label).

The context for the buyer's claims is a transaction identifier: the buyer may not leave any feedback before successfully placing a winning bid on an item listed by the seller in the auction market. Presumably, the feedback primarily describes the quality and delivery of the goods purchased. A buyer may provide two different sets of complex claims, and the limits on each vary:

  • 1. Typically, when a buyer wins an auction, the delivery phase of the transaction starts and the seller is motivated to deliver goods of the advertised quality in a timely manner. After either a timer expires or the goods have been delivered, the buyer is encouraged to leave feedback on the seller: a compound claim in the form of a three-level rating (positive, neutral, or negative) and a short text-only comment about the seller and/or transaction. The ratings make up the main component of seller feedback karma.
  • 2. For each week in which a buyer completes a transaction with a seller, the buyer may leave one set of detailed seller ratings: a compound claim of four separate 5-star ratings in the categories item as described, communications, shipping time, and shipping and handling charges. The only use of these ratings, other than aggregation for community averages, is to qualify the seller as a power seller. (Both input claims are sketched in code after this list.)
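
To make the shape of these two input claims concrete, here is a minimal sketch of how they might be represented as data structures. The class and field names are our own invention for illustration; eBay's model specifies only the claim values and their transaction context.

    from dataclasses import dataclass
    from enum import Enum

    class FeedbackRating(Enum):
        POSITIVE = 1
        NEUTRAL = 0
        NEGATIVE = -1

    @dataclass
    class SellerFeedback:
        """Claim 1: one per transaction -- a three-level rating plus a comment."""
        transaction_id: str      # ties the claim to a real purchase
        rating: FeedbackRating
        comment: str             # short, text-only

    @dataclass
    class DetailedSellerRatings:
        """Claim 2: at most one set per buyer/seller/week -- four 5-star ratings."""
        transaction_id: str
        item_as_described: int   # each rating is 1..5 stars
        communications: int
        shipping_time: int
        shipping_charges: int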

eBay displays an extensive set of karma scores for sellers: the amount of time the seller has been a member of eBay, color-coded stars, percentages that indicate positive feedback, more than a dozen statistics tracking past transactions, and lists of testimonial comments from past buyers or sellers. And that is only a partial list of the seller reputations that eBay puts on display.

The full list of displayed reputations almost serves as a menu of reputation types present in the model. Every process box represents a claim displayed as a public reputation to everyone, so to provide a complete picture of eBay seller reputation, we'll simply detail each output claim separately:

  • 3. The feedback score counts every positive rating given by a buyer as part of seller feedback, a compound claim associated with a single transaction. This number is cumulative for the lifetime of the account, and it generally loses its value over time; buyers tend to notice it only if it has a low value.

It is fairly common for a buyer to change this score within certain time limits, so this effect must be reversible. Sellers spend a lot of time and effort working to change negative and neutral ratings to positive ones, to gain a power seller rating or avoid losing one. Whenever this score changes, it is used to recalculate the feedback level.
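
A minimal sketch of such a reversible counter, with names of our own choosing:

    class FeedbackScore:
        """Lifetime count of positive ratings; reversible because buyers
        may revise a rating within the allowed window."""

        def __init__(self) -> None:
            self.positives = 0

        def on_positive(self) -> None:
            self.positives += 1

        def on_reversal(self) -> None:
            # The buyer changed a positive rating to neutral or negative.
            self.positives -= 1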

  • 4. The feedback level claim is a graphical representation (in colored stars) of the feedback score. This is usually a simple data transformation and normalization step; here we've represented it as a mapping table, illustrating only a small subset of the mappings (see the sketch in code after item 5). eBay's visual system of stars relies, in part, on the assumption that users will know that a red shooting star is a better rating than a purple star. We have our doubts about the utility of this representation for buyers. Iconic scores such as these often mean more to their owners, and they may represent only a slight incentive for increased activity in an environment in which each successful interaction equals cash in your pocket.
  • 5. The community feedback rating is a compound claim containing the historical counts of each of the three possible seller feedback ratings (positive, neutral, and negative) over the last 12 months, so that the totals can be presented in a table showing the results for the last month, 6 months, and year. Older ratings are decayed continuously, though eBay does not disclose how often this data is updated when no new ratings arrive; one possibility would be to update it whenever the seller posts a new item for sale.

The positive and negative ratings are used to calculate the positive feedback percentage.
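
Items 4 and 5 lend themselves to short sketches: an illustrative score-to-star mapping (only a few levels, and the thresholds here should be treated as placeholders rather than eBay's authoritative table) and a windowed tally that could back the month/6-month/year display. All names are ours.

    from datetime import datetime, timedelta

    # Item 4: an illustrative subset of the score -> star mapping.
    FEEDBACK_LEVELS = [
        (10, "yellow star"),
        (500, "purple star"),
        (100_000, "red shooting star"),
    ]

    def feedback_level(score: int):
        level = None  # below the lowest threshold: no star yet
        for threshold, star in FEEDBACK_LEVELS:
            if score >= threshold:
                level = star
        return level

    # Item 5: windowed tallies of ratings, so one history can be
    # reported for the last month, 6 months, and year.
    def windowed_counts(ratings, days: int):
        """ratings: iterable of (timestamp, value) pairs, where value
        is "positive", "neutral", or "negative"."""
        cutoff = datetime.utcnow() - timedelta(days=days)
        counts = {"positive": 0, "neutral": 0, "negative": 0}
        for timestamp, value in ratings:
            if timestamp >= cutoff:
                counts[value] += 1
        return counts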

  • 6. The positive feedback percentage claim is calculated by dividing the number of positive feedback ratings by the sum of the positive and negative feedback ratings over the last 12 months. Note that neutral ratings are not included in the calculation. This is a recent change, reflecting eBay's confidence in the success of updates deployed in the summer of 2008 to prevent bad sellers from leaving retaliatory ratings against buyers who were unhappy with a transaction (a practice known as tit-for-tat negatives). Initially the calculation included neutral ratings, because eBay feared that would-be negative feedback would simply be displaced into neutral ratings; in practice, it was not.

This score is an input into the highly coveted power seller rating. That makes each individual positive or negative rating on eBay critical: it can mean the difference between a seller acquiring power seller status or not.
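
As a worked sketch of the calculation (our own function, reusing the windowed counts from the earlier sketch):

    def positive_feedback_percentage(counts) -> float:
        """Item 6: neutral ratings are excluded from both the numerator
        and the denominator."""
        rated = counts["positive"] + counts["negative"]
        if rated == 0:
            return 100.0  # our assumption for a seller with no ratings yet
        return 100.0 * counts["positive"] / rated

    # 980 positives and 20 negatives over the last 12 months:
    # 980 / (980 + 20) = 98.0%, just meeting the power seller bar.
    print(positive_feedback_percentage(
        {"positive": 980, "neutral": 15, "negative": 20}))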

  • 7. The Detailed Seller Ratings community averages are simple reversible averages for each of the four rating categories: item as described, communications, shipping time, and shipping and handling charges. There is a limit on how often a buyer may contribute DSRs.

eBay added these categories as a new reputation model only recently, because including such factors in the overall seller feedback ratings diluted the quality of seller and buyer feedback. Sellers could end up in disproportionate trouble just because of a bad shipping company or a delivery that took a long time to reach a remote location. Likewise, buyers were bidding low prices only to end up feeling gouged by shipping and handling charges. Fine-grained feedback allows one-off small problems to be averaged out across the DSR community averages instead of being translated into red-star negative scores that poison trust overall. Fine-grained feedback is also actionable: it motivates sellers to improve, since the DSR scores make up half of the power seller rating.
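
A reversible average can be maintained as a running sum and count, so a revised or withdrawn rating can be backed out without recomputing from scratch. A minimal sketch, again with invented names:

    class ReversibleAverage:
        """One instance per DSR category (e.g., shipping time)."""

        def __init__(self) -> None:
            self.total = 0
            self.count = 0

        def add(self, stars: int) -> None:
            self.total += stars
            self.count += 1

        def remove(self, stars: int) -> None:
            # Back out a rating that the buyer revised or withdrew.
            self.total -= stars
            self.count -= 1

        @property
        def average(self) -> float:
            return self.total / self.count if self.count else 0.0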

  • 8. The power seller rating, which appears next to the seller's ID, is a prestigious label that signals the highest level of trust. It includes several factors external to this model, but two critical components are the positive feedback percentage, which must be at least 98%, and the DSR community averages, each of which must be at least 4.5 stars (around 90% positive). Interestingly, the DSR requirements are more forgiving than the feedback requirement, which tilts the rating toward the overall evaluation of each transaction rather than its details.
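
Pulling the two reputation inputs together, the qualification check might look like the sketch below. This is our simplification: real power seller status also depends on sales volume and other requirements outside this model.

    def qualifies_for_power_seller(positive_pct: float, dsr_averages) -> bool:
        """Checks only the two reputation inputs described above."""
        if positive_pct < 98.0:
            return False
        return all(average >= 4.5 for average in dsr_averages.values())

    print(qualifies_for_power_seller(98.0, {
        "item_as_described": 4.8,
        "communications": 4.7,
        "shipping_time": 4.6,
        "shipping_charges": 4.6,
    }))  # True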

Though the context for the buyer's claims is a single transaction or a history of transactions, the context for the aggregate reputations generated from them is trust in the eBay marketplace itself. If buyers can't trust sellers to deliver on their promises, eBay cannot do business. The roll-ups transform single-transaction claims into trust in the seller, and, by extension, that trust rolls up into eBay itself. This chain of trust is so integral to eBay's continued success that the company must continuously update the marketplace's interface and reputation systems.

October 21, 2009

User Motivations & System Incentives

Reputation Wednesday is an ongoing series of essays about reputation-related matters. This week's entry summarizes our model for describing user motivations and incentives for participation in reputation systems.

This is a short summary of a large section of Chapter 6 of our book, Building Web Reputation Systems, entitled Incentives for User Participation, Quality, and Moderation. For this blog post, the content has been shuffled a bit: first we name the motivations and related incentive models, then we describe how reputation systems interact with each motivational category. For a more detailed discussion of the incentive sub-categories, read Chapter 6.

Motivations and Incentives for social media participation:

  • Altruistic motivation: for the good of others
    • Tit-for-Tat or Pay-it-Forward incentives: "I do it because someone else did it for me first"
    • Friendship incentives: "I do it because I care about others who will consume this"
    • Know-it-All or Crusader or Opinionated incentives: "I do it because I know something everyone else needs to know"
  • Commercial motivation: to generate revenue
    • Direct revenue incentives: Extracting commercial value (better yet, cash) directly from the user as soon as possible
    • Branding incentives: Creating indirect value by promotion - revenue will follow later
  • Egocentric motivation: for self-gratification
    • Fulfillment incentives: The desire to complete a task, assigned by oneself, a friend, or the application
    • Recognition incentives: The desire for the praise of others
    • The Quest for Mastery: Personal and private motivation to improve oneself

Altruistic or Sharing Incentives

Altruistic, or sharing, incentives reflect the giving nature of users who have something to share (a story, a comment, a photo, an evaluation) and who feel compelled to share it on your site. Their incentives are internal: they may feel an obligation to another user or to a friend, or they may feel loyal to (or despise) your brand.

When you're considering reputation models that offer altruistic incentives, remember that these incentives exist in the realm of social norms: they're all about sharing, not accumulating commercial value or karma points. Avoid aggrandizing users driven by altruistic incentives; they don't want their contributions to be counted, recognized, ranked, evaluated, compensated, or rewarded in any significant way. Comparing their work to anyone else's will actually discourage them from participating.

(See more on Tit-for-Tat, Friend, and Know-it-All altruistic incentives.)

Commercial Incentives

Commercial incentives reflect people's motivation to do something for money, though the money may not come in the form of direct payment from the user to the content creator. Advertisers have a nearly scientific understanding of the significant commercial value of something they call branding. Likewise, influential bloggers know that their posts build their brand, which often involves the perception of them as subject matter experts. The standing that they establish may lead to opportunities such as speaking engagements, consulting contracts, improved permanent positions at universities or prominent corporations, or even a book deal. A few bloggers may actually receive payment for their online content, but more are capturing commercial value indirectly.

Reputation models that exhibit content control patterns based on commercial incentives must communicate a much stronger user identity. They need strong and distinctive user profiles with links to each user's valuable contributions and content. For example, as part of reinforcing her personal brand, an expert in textile design would want to share links to content that she thinks her fans will find noteworthy.

But don't confuse the need to support strong profiles for contributors with the need for a strong or prominent karma system. When a new brand is being introduced to a market, whether it's a new kind of dish soap or a new blogger on a topic, a karma system that favors established participants can be a disincentive to contribute content. A community decides how to treat newcomers-with open arms or with suspicion. An example of the latter is eBay, where all new sellers must "pay their dues" and bend over backward to get a dozen or so positive evaluations before the market at large will embrace them as trustworthy vendors. Whether you need karma in your commercial incentive model depends on the goals you set for your application. One possible rule of thumb: If users are going to pass money directly to other people they don't know, consider adding karma to help establish trust.

(See more on Direct revenue and Branding commercial incentives.)

Egocentric Incentives

Egocentric incentives are often exploited in the design of online computer games and many reputation-based websites. The simple desire to accomplish a task taps into deeply hardwired motivations described in behavioral psychology as classical and operant conditioning (which involves training subjects to respond to food-related stimuli) and schedules of reinforcement. This research indicates that people can be influenced to repeat simple tasks by providing periodic rewards, even a reward as simple as a pleasing sound.

But an individual animal's behavior in the social vacuum of a research lab is not the same as the way we very social humans display our egocentric behaviors to one another. Humans form teams and compete in tournaments. We follow leaderboards, comparing ourselves to others and to the groups we associate with. Even if our accomplishments don't help another soul or generate any revenue for us personally, we often want to be recognized for them. And even if we don't seek accolades from our peers, we want to be able to demonstrate mastery of something, to hear the message "You did it! Good job!"

Therefore, in a reputation system based on egocentric incentives, user profiles are a key requirement. In this kind of system, users need someplace to show off their accomplishments, even if only to themselves. Almost by definition, egocentric incentives involve one or more forms of karma. Even with only a simple system of granting trophies for achievements, users will compare their collections to one another. New norms will appear that look more like market norms than social norms: people will trade favors to advance their karma, people will attempt to cheat to get an advantage, and those who feel they can't compete will opt out altogether.

Egocentric incentives and karma do provide very powerful motivations, but they are almost antithetical to altruistic ones. The egocentric incentives of many systems have been over-designed, leading to communities consisting almost exclusively of experts. Consider just about any online role-playing game that has survived more than three years. For example, to retain its highest-level users and the revenue stream they produce, World of Warcraft must continually produce new content targeted at those users. If it stops producing new content for its most dedicated users, its business will collapse. This elder-game focus stunts WoW's growth: parent company Blizzard has all but abandoned improvements aimed at acquiring new users. When new users do arrive (usually in the wake of a marketing promotion), they end up playing alone, because the veteran players are interested only in the new content and don't want to slog through the lowest levels of the game yet again.

(See more on Fulfillment, Recognition, and Quest-for-Mastery egocentric incentives.)

October 14, 2009

A Case Study: Yahoo! Answers Community Moderation

Reputation Wednesday is an ongoing series of essays about reputation-related matters. This week's entry announces two important milestones.


We are proud to announce that our Chapter 12 case study, Yahoo! Answers Community Content Moderation, is now available for review! This chapter is a doozy. Using the structure and guidance from the rest of the book, it attempts to describe, in detail, a project that has saved Yahoo! millions of dollars in customer care costs (and produced a stronger, more vibrant community in the process). No excerpts here. It's all good stuff; go read it.

And, coinciding with this draft chapter release, Randy and I can also announce that we've reached an important milestone for the book: draft-complete status. Our editor Mary blessed it on Monday. We're expecting feedback from our early reviewers soon, which will dictate the tempo and scope of rewrites, so… stay tuned! We will, of course, continue to blog here and stick faithfully to our Reputation Wednesday schedule.

Whew.

October 13, 2009

Wikimania Redux Tonight at BayCHI

For those of you in the Bay Area, we heartily recommend tonight's BayCHI gathering, a redux presentation of several talks from Wikimania 2009. BuildingRep friend and Yahoo! Director of UX Micah Alpern will be re-presenting his talk on Yahoo! Answers Community Moderation. (We posted it here a couple of weeks ago.)

This same project provides an in-depth case study for the closing chapter of our book, with loads of real-world examples of useful metrics, ROI, and the practical and social effects of reputation systems. If you can make it to Micah's talk tonight at all, we encourage you to do so. Time and location information here.

October 07, 2009

The Dollhouse Mafia, or "Don't Display Negative Karma"

Reputation Wednesday is an ongoing series of essays about reputation-related matters. This week's essay explains why publicly displayed user reputation (karma) is a very bad idea. It is excerpted from Chapter 7: Objects, Inputs, Scope, and Mechanism.

Because an underlying karma score is a number, product managers often misunderstand the interaction between numerical values and online identity. The thinking goes something like this:

  • In our application context, the users' value will be represented by a single karma, which is a numerical value.
  • There are good, trustworthy users and bad, untrustworthy users, and everyone would like to know which is which, so we will display their karma.
  • We should represent good actions as positive numbers and bad actions as negative, and we'll add them up to make karma.
  • Good users will have high positive scores (and other users will interact with them), and bad users will have low negative scores (and other users will avoid them).

This thinking, though seemingly intuitive, is impoverished and wrong in at least two important ways.

  • There can be no negative public karma, at least not for establishing the trustworthiness of active users. A bad enough public score will simply lead the user to abandon the account and start a new one, a process we call karma bankruptcy. This defeats the primary goal of karma: to publicly identify bad actors. Assuming that karma starts at zero for a brand-new user about whom the application has no information, public karma can never effectively go below zero, since karma bankruptcy resets it. Just look at the records of eBay sellers with more than three red stars: most haven't sold anything in months or years, either because the sellers quit or because they're now doing business under different account names.
  • It's not a good idea to combine positive and negative inputs in a single public karma score. Say you encounter a user with 75 karma points and another with 69 karma points. Who is more trustworthy? You can't tell: maybe the first user used to have hundreds of good points but recently accumulated a lot of negative ones, while the second user has never received a negative point at all. (The ambiguity is sketched in code after this list.) If you must have public negative reputation, handle it as a separate score, as in the eBay seller feedback pattern.
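
Here is a tiny illustration of why a single net score hides the information that matters (all numbers invented):

    def net_karma(history) -> int:
        return history["positive"] - history["negative"]

    # Two very different histories collapse into the same public score.
    history_a = {"positive": 375, "negative": 300}  # a wave of recent trouble
    history_b = {"positive": 75, "negative": 0}     # never a single complaint
    print(net_karma(history_a), net_karma(history_b))  # both print 75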

Even eBay, home to the best-known example of public negative karma, doesn't try to represent how untrustworthy a seller might be; it only gives buyers reasons to take specific actions to protect themselves. In general, avoid negative public karma. If you really want to know who the bad guys are, keep the score separate and restrict it to internal use by moderation staff.

Virtual Mafia Shakedown: Negative Public Karma

The Sims Online was a multiplayer version of the popular Sims games by Electronic Arts and Maxis, in which the user controlled an animated character in a virtual world with houses, furniture, games, virtual currency (called Simoleons), rental property, and social activities. You could call it playing dollhouse online.

One of the features that supported user socialization in the game was the ability to declare that another user was a trusted friend. The feature involved a graphical display that showed the faces of users who had declared you trustworthy outlined in green, attached in a hub-and-spoke pattern to your face in the center.

People checked each other's hubs for help in deciding whether to take certain in-game actions, such as becoming roommates in a house. Decisions like these are costly for a new user: the ramifications stick with a newbie for a long time, and backing out of a bad decision is not easy. The hub was a useful decision-making device for these purposes.

That feature was fine as far as it went, but unlike other social networks, The Sims Online also allowed users to declare other users untrustworthy. The face of an untrustworthy user appeared circled in bright red among all the trustworthy faces in a user's hub.

It didn't take long for a group calling itself the Sims Mafia to figure out how to use this mechanic to shake down new users when they arrived in the game. The dialog would go something like this:

"Hi! I see from your hub that you're new to the area. Give me all your Simoleans or my friends and I will make it impossible to rent a house.”

"What are you talking about?"

"I'm a member of the Sims Mafia, and we will all mark you as untrustworthy, turning your hub solid red (with no more room for green), and no one will play with you. You have five minutes to comply. If you think I'm kidding, look at your hub-three of us have already marked you red. Don't worry, we'll turn it green when you pay…"

If you think this sounds like a fun game, think again: a typical response to this shakedown was for the user to decide that the game wasn't worth $10 a month. Playing dollhouse doesn't usually involve gangsters.

Avoid public negative reputation. Really.


October 05, 2009

Do Trolls Eat Spam?

What's the difference between trolling behavior and plain old spam? It's a subtle distinction, but a critical one to understand when combating either. We classify as spam those communications that are indiscriminate, unwanted, and broadcast to a large audience.

Fortunately, the same characteristics that mark a communication as spam (its indiscriminateness, its lack of relation to the conversation at hand, its overtly commercial appeals) also make it stand out. You can probably identify spam on sight, after just a quick inspection. We can teach these same tricks to machines: although spammers constantly change their tactics to evade detection, spam can generally be caught by automated methods.
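
As a deliberately naive illustration of such machine methods (a toy of our own, not any real filter's algorithm), even a crude keyword-weighting scheme separates much broadcast spam from on-topic conversation. Real systems use trained statistical classifiers, but the principle is the same:

    # Score a message by summing the weights of indiscriminate,
    # commercial-sounding signals (all weights invented).
    SPAM_SIGNALS = {
        "free": 1.0,
        "click here": 2.0,
        "limited offer": 2.0,
        "no obligation": 2.5,
    }

    def spam_score(message: str) -> float:
        text = message.lower()
        return sum(weight for signal, weight in SPAM_SIGNALS.items()
                   if signal in text)

    print(spam_score("CLICK HERE for a FREE limited offer!"))    # 5.0
    print(spam_score("Great talk. How do I revise my rating?"))  # 0.0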

Trollish behavior, however, is another matter altogether. Trolls are usually not motivated by money; more likely, they crave attention and want to disrupt the larger conversation. (See egocentric incentives.) Trolls quickly realize that the best way to accomplish these goals is by non-obvious means. An extremely effective means of trolling, in fact, is to disguise your trollish intentions as real conversation.

Accomplished trolls can be so subtle that even human agents would be hard-pressed to detect them. In Chapter 7, we discuss a kind of subtle trolling in a sports context: masquerading as a fan of the opposing team. For these trolls, presenting themselves as faithful fans is part of the fun; it's all the more disruptive once they start to trash-talk "the home team."

How do you detect that?

It's hard for a human and nigh-impossible for a machine. It is, however, easier with a number of humans. By adding consensus and reputation-enabled methods, you can more reliably discern trollish behavior from sincere contribution to the community. Because reputation systems, to some degree, reflect the tastes of the community, they also have a better-than-average chance of catching behavior that transgresses against those tastes.