Corporate Ratings Abuse and What to Do About It
[Dilbert cartoon, ©2009, United Feature Syndicate, Inc.]
The reputation of the Ratings and Reviews pattern of reputation systems has been taking more hits lately:
- The Daily Background reported: Belkin’s Development Rep is Hiring People to Write Fake Positive Amazon Reviews using Amazon's Mechanical Turk. The going rate was $0.65 each.
- David Pogue soon after wrote Carbonite Stacks the Deck on Amazon as employees wrote positive reviews without identifying themselves as such. This is the account that probably inspired the Dilbert cartoon for Feb. 1, 2009.
Corporate reputation system (specifically Ratings and Reviews) abuse isn't new. See Is Harriet Klausner for real?, which deals with an impossibly prolific review writer, and Merchants angry over getting yanked by Yelp, which details one mitigation technique used to shut down review swapping by business owners.
It's safe to say that as long as there is money to be made, all reputation systems - not just Ratings and Reviews - will be subject to this sort of manipulation attempt. The incentive for the corporate user or spammer is clear. Think about it: writing shill reviews is probably significantly cheaper and more effective than sending spam email, at least for now. Yahoo! suffers from an annual attack of this form from November to December, as recounted in PriceRitePhoto: Abusive Bait and Switch Camera Store, where Thomas Hawk explained:
One of the things that troubles me the most about this situation is that I found this retailer through Yahoo! shopping and they were perceived to have positive feedback. Is the feedback mechanism for Yahoo! Shopping broken? How could this horrible retailer have a four star rating with 858 ratings? I’m convinced that there is a possibility that many of the “reviews” for this company could be fake. I should though have sorted through the reviews to the worst to see that many others had fallen prey to similar fraud by this company.
Oh Thomas, you were right that the good reviews were nearly all fake. There are a dozen or more slimy NYC electronics merchants that have control of hundreds of accounts, and they all rate and review each other's sites fraudulently. The bad guys have it down to a science, and sites like Yahoo! have to detect and remove these abuses every year - something they aren't always very good at.
Publicists and creators write reviews of books and movies that they're promoting. eBay sellers refund purchase prices and even pay cash to get users to remove negative feedback.
Wow, that's a lot of abuse
You almost wonder why people trust these systems at all, but they do. [I'm looking for newer data, as I suspect the recent increase in the reporting of abuse might erode this confidence; if you have newer info, please comment below.] Clearly, though, it should be a priority to prevent, or at least detect and repair, any abuse that a site may have - especially since a brand associated with high-quality reputation can translate into higher revenue.
Mitigation Techniques
In our book and wiki, we will go into some detail about specific abuse mitigation techniques, but here's a quick summary of some techniques that have helped with properties at Yahoo! and elsewhere:
- Strengthen Identity
- Require registration to rate and review. Period. Even then, cheap identity systems such as Yahoo! and Hotmail, where you can get an email address in a few seconds, are at the root of great problems. But once you have an account, you can build up identity strength by having all of the user's significant content interactions attached. Their social network, contacts list, high scores, shared media, profile customization, content contributions, saved preferences, and more are time consuming to set up, establishing a switching cost. The threat of losing this work is a deterrent to abuse. The strength of identity can be used to Weight Average Ratings (see below).
- Establish Karma
- Attaching User Reputation, or Karma, to a user provides explicit numeric values that can be used with the other techniques outlined here. See Chapter 8 for a detailed discussion of this topic.
- Report only established averages
- Reporting an average of 5 stars when there is only one rating is ridiculous. Apply a minimum count before showing the average. You could also do as Amazon does and surface the distribution of scores. This raises the barrier to entry for review abuse.
- Weight Average Ratings
- Simply put, more trusted users get more say in the average. New users get (almost) no say; reviews written all on the same day or from the same IP address get devalued; Facebook Connect IDs get treated as if they are real people; and so on.
- Apply a Heavy Hammer
- If you detect enough abuse from an account, delete it, and make sure to delete all of its ratings and reviews. Assume they are all tainted. Recalculate all affected averages. This is critical to deter an abuser, turning every false review into a Russian-roulette trigger-pull: will this one kill the account and all the work done?
- Community Content Suspension
- You can't have eyes everywhere, but your users do. If you implement a system to hide content based on trusted user reports, as Yahoo! Answers did, you can at least get rid of the most obvious stuff nearly instantaneously.
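To make the minimum-count and weighting ideas above concrete, here is a minimal sketch of a trust-weighted average that refuses to report a score until enough ratings exist. The function name, the trust weights, and the threshold of 5 are illustrative assumptions, not any site's actual implementation:

```python
# Hypothetical sketch: a trust-weighted rating average that is only
# reported once a minimum number of ratings exists. Thresholds and
# weights are illustrative assumptions.

MIN_RATINGS = 5  # don't show an average until this many ratings exist


def weighted_average(ratings, min_count=MIN_RATINGS):
    """ratings: list of (stars, trust_weight) pairs.

    New or suspicious accounts carry a low trust_weight, so their
    ratings barely move the average. Returns None until the minimum
    count is reached -- never report "5 stars" on a single vote.
    """
    if len(ratings) < min_count:
        return None
    total_weight = sum(w for _, w in ratings)
    if total_weight == 0:
        return None
    return sum(stars * w for stars, w in ratings) / total_weight


# One brand-new shill account (weight 0.05) can't drag down
# four established raters (weight 1.0 each).
ratings = [(5, 1.0), (4, 1.0), (5, 1.0), (4, 1.0), (1, 0.05)]
avg = weighted_average(ratings)  # close to 4.5, barely dented by the shill
```

Devaluing same-day or same-IP reviews fits the same shape: those signals just lower the trust weight before the average is taken.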
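The Heavy Hammer described above can be sketched in a few lines. The data layout (a dict of per-item rating dicts) and the function name are assumptions made for illustration:

```python
# Hypothetical sketch of the "heavy hammer": when an account is judged
# abusive, delete every rating it ever left -- assume they are all
# tainted -- and recalculate the averages of every affected item.


def apply_heavy_hammer(ratings_by_item, abusive_user):
    """ratings_by_item: {item_id: {user_id: stars}}.

    Removes all of the abuser's ratings in place and returns the
    recalculated average for each item that was affected (None if
    no legitimate ratings remain).
    """
    recalculated = {}
    for item_id, ratings in ratings_by_item.items():
        if abusive_user in ratings:
            del ratings[abusive_user]
            recalculated[item_id] = (
                sum(ratings.values()) / len(ratings) if ratings else None
            )
    return recalculated


store = {
    "camera-123": {"shill1": 5, "alice": 2, "bob": 1},
    "lens-456": {"shill1": 5},
}
new_averages = apply_heavy_hammer(store, "shill1")
# camera-123 drops to the honest average; lens-456 has no ratings left
```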
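Community Content Suspension in the style described above can also be sketched simply: hide an item as soon as the combined trust of the users reporting it crosses a threshold. The threshold value and all names here are assumptions, not Yahoo! Answers' actual mechanism:

```python
# Hypothetical sketch of community content suspension: a few trusted
# reporters can hide content almost instantly, while a mob of
# throwaway accounts cannot. The 1.5 threshold is an assumption.

HIDE_THRESHOLD = 1.5  # combined reporter trust needed to auto-hide


def should_hide(reports, reporter_trust, threshold=HIDE_THRESHOLD):
    """reports: user_ids who flagged the content (duplicates ignored).
    reporter_trust: {user_id: trust score in [0.0, 1.0]}.
    """
    return sum(reporter_trust.get(u, 0.0) for u in set(reports)) >= threshold


trust = {"veteran1": 1.0, "veteran2": 0.9, "newbie": 0.1}

# Two veterans flag it: hidden immediately, pending human review.
hide_now = should_hide(["veteran1", "veteran2", "veteran2"], trust)

# A lone throwaway account flagging it repeatedly does nothing.
ignored = should_hide(["newbie"] * 10, trust)
```

Deduplicating reports and weighting them by reporter trust is what keeps this from becoming a new abuse vector (brigading with fresh accounts).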
None of these are sure-fire - many of them are just stop-gap techniques until you can build more effective solutions, such as Community Content Suspension or the Heavy Hammer.