The Reputation Framework

The reputation framework is the software that forms the execution environment for reputation models. This chapter takes a deeper, much more technical look at the framework. The first section is intended to help software architects and technically minded product managers generate appropriate requirements for implementation and for possible reuse by other applications.

The second section of the chapter will describe, in detail, two different reputation frameworks with very different sets of requirements: The Invisible Reputation Framework and The Yahoo! Reputation Platform.

This chapter stands alone; it talks about messaging systems, databases, performance, scale, reliability, and so on, and can be safely skipped if you are not interested in such gory internals.

Reputation Framework Requirements

This section will help you identify the requirements for your reputation framework. As with all projects, the toughest requirements can be stated as a series of trade-offs. Below is a list of questions to ponder; we then dive into each trade-off in greater detail, including discussion of some known techniques for addressing it. When selecting the requirements for your framework, be certain that they consider its total lifetime: meeting your needs at the beginning of the project and going forward, as your application grows and becomes successful.

Keep in mind that your first reputation model may be just one of several to utilize your reputation framework. Also, your reputation system is only a small part of an application…it shouldn't be the only part.

  • Are your reputation calculations static or dynamic? Static means that you can compute your claim values on a go-forward basis, without having to access all previous inputs. Dynamic means the opposite…each input and/or each query regenerates the values from scratch.
  • What is the scale of your reputation system…small or huge? What is the rate of inputs per minute? How many times will reputation scores be accessed for display or use by your application?
  • How reliable must the reputation scores be…transactional or best-effort?
  • How portable is the data…should the scores be shared with other applications or integrated with their native application only?
  • How complex is the reputation model…complicated or simple? If it is currently simple, will it stay that way?
  • Which is more important, getting the best possible response immediately, or a perfectly accurate response as soon as possible? Or, more technically phrased: What is the most appropriate messaging method…Optimistic/Fire-and-Forget or Request-Response/Call-Return?

Calculations: Static vs. Dynamic

There are significant trade-offs in the domain of performance and accuracy when considering how to record, calculate, store, and retrieve reputation events and scores. Some static models have scores that need to be continuous and real-time…they need to be as accurate as possible at any given moment. An example would be spammer IP reputation for industrial-scale email providers. Other models may be calculated in batch mode, because a large amount of data must be consulted for each score calculation.

Dynamic reputation models have constantly changing constraints:

  • Variable contexts: The data considered for each calculation is constrained differently for each display context. This use is common in social applications, such as Zynga's popular Texas HoldEm Poker, which displays friends-only leaderboards.
  • Complex multi-element relationships: The data calculations affect each other in a nonlinear way, as in search-relevance calculations like Google's PageRank. Recommender systems are also dynamic data models…typically a large portion of the data set is considered in order to place every element in a multidimensional space for nearest-neighbor determination. This allows the display of “People like you also bought…” entities.

Static: Performance, Performance, Performance

Very large applications require up-to-the-second reputation statements available to any reading application at incredibly high rates. For example, a top email provider might need to know a spammer reputation for every IP address for email messages entering the service in real time! Even if that's cached in memory, when the reputation changes state, say from non-spammer to spammer, instant notification is crucial. There's just no time to use the traditional database method of recalculating the reputation over and over again from mostly unchanged data. By static we mean roll-forward calculations…every reputation input side-effects the roll-ups in such a way that they contain the correct value at that moment and enough state to continue the process for the next input.
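The roll-forward idea can be sketched in a few lines. The class and field names below are illustrative assumptions, not from any real framework; the point is that each input side-effects the stored state, so the current value is always available without rescanning history.

```python
# Roll-forward (static) rating roll-up: each input updates stored state
# directly, so the current average is always available without re-reading
# prior inputs. Names and structure are illustrative.

class RollingAverage:
    """Keeps just enough state (count and total) to fold in new inputs."""

    def __init__(self):
        self.count = 0
        self.total = 0.0

    def add(self, rating):
        # One read-modify-write per input; no scan of historical data.
        self.count += 1
        self.total += rating

    @property
    def mean(self):
        return self.total / self.count if self.count else 0.0
```

Folding in ratings of 5 and 3 leaves the roll-up at 4.0 without ever revisiting either input.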

Dynamic: Reputation Within Social Networks

Given the description above, one might be tempted to select the dynamic requirement for a framework because it enables the greatest range of reputation models. But there is a serious cost to this option…it increases the cost of implementing, testing, and, most importantly, scaling your framework. Static reputations are easier to understand, implement, and test, and they scale better. Look at the history of Twitter.com's Fail Whale…a direct result of the requirement for a dynamic custom display of tweets for each and every user. Dynamism is costing them a fortune.

When possible, find ways to simplify your model, either by adding more specific contexts or by reducing dimensions through some clever math. This book won't cover the many forms of dynamic systems in any depth, as many of the algorithms are well covered in academic literature. See the back matter for pointers to various document archives.

Scale: Large vs. Small

Scale…or the number of inputs (and the database writes they ultimately generate) and reputation claim value reads…is probably the most important factor when considering any reputation system implementation.

Small scale is any transaction rate such that a single instance of a database can handle all reputation related reads and writes without becoming overloaded and falling behind.

Large scale requires additional software services to support reading and storing reputation statements. At the point where your reputation system grows large enough to require multiple databases (or distributing the processing of inputs across multiple machines), you probably need a distinct reputation framework. It becomes just too complicated for the high-level application to manage the reputation data model.

At a low enough volume, say less than thousands of reads per minute and only hundreds of writes, it is easy to think that simple database API calls mixed directly into the application code as needed would be the best approach for speed of development. Why bother adding a layer to isolate the reputation system's access to the database? Consider that your application may not always stay small. What if your application is a hit and suddenly you need multiple databases or more reputation frameworks to process all the inputs? This has happened to more than a few web applications over the years, and on occasion…such as when Ma.gnolia lost all of its users' data…it can be fatal to the business.

So, even for small scale applications, we recommend a clean, modular boundary between the application and the reputation system…all inputs should use a class or call a library that encapsulates the reputation process code and the database operations. That way, the calculations are centralized and the framework can be incrementally scaled. See Appx_A-The-Invisible-Framework for a specific example of this practice.
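As a sketch of such a boundary, all reputation reads and writes go through one class, with the backing store hidden behind it. The names (`ReputationService`, `submit_rating`) and the in-memory dict standing in for the database are hypothetical:

```python
# A thin boundary layer: the application calls this class instead of
# issuing database queries in-line. The dict stands in for a real
# database; swapping in a networked store later needs no app changes.

class ReputationService:
    def __init__(self, store=None):
        # target_id -> (count, total); replaceable with any backing store.
        self._store = store if store is not None else {}

    def submit_rating(self, target_id, rating):
        count, total = self._store.get(target_id, (0, 0.0))
        self._store[target_id] = (count + 1, total + rating)

    def get_score(self, target_id):
        count, total = self._store.get(target_id, (0, 0.0))
        return total / count if count else None
```

Because the calculation and storage are centralized here, scaling up later means reworking one class rather than hunting down query calls scattered through the application.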

Reliability: Transactional vs. Best-Effort

Reputation claims are often critical to the success of an application, commercial or otherwise. So it would seem to follow naturally that the value of the claims should be 100% reliable: every input's effect on the reputation model should be reflected in the calculations. There's nothing more valuable than the application's contributions to the bottom line, right?

There's another reason you'd like reputation roll-ups to be reliable, especially when users provide the claims that are combined to create them: sometimes users abuse the system, and this abuse often comes in great volume if the scores have financial value to the end users themselves (see Chap_4-eBay_Merchant_Feedback_Karma). If scores are reliable and tracked on a transactional basis, the effects of the abuse can be reversed. If a user is determined to abuse the reputation system, often the correct action is to reverse all of their inputs on every target that they ever evaluated. This is the undo anything feature. It requires a reputation framework that is optimized to support reversible inputs and a database that is indexed by source identifier.
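A minimal sketch of the undo anything idea, assuming an in-memory event log keyed by source identifier (all names here are illustrative):

```python
# Reversible inputs: every event is stored keyed by its source, so all of
# one user's contributions can be rolled back on every target at once.

class ReversibleReputation:
    def __init__(self):
        self.events = []     # (source_id, target_id, value) event log
        self.rollups = {}    # target_id -> (count, total)

    def submit(self, source_id, target_id, value):
        self.events.append((source_id, target_id, value))
        count, total = self.rollups.get(target_id, (0, 0.0))
        self.rollups[target_id] = (count + 1, total + value)

    def revoke_source(self, source_id):
        # "Undo anything": reverse every input this source ever made.
        kept = []
        for source, target, value in self.events:
            if source == source_id:
                count, total = self.rollups[target]
                self.rollups[target] = (count - 1, total - value)
            else:
                kept.append((source, target, value))
        self.events = kept

    def mean(self, target_id):
        count, total = self.rollups.get(target_id, (0, 0.0))
        return total / count if count else None
```

Note the cost this section warns about: every input is stored twice, once as a reversible event and once folded into the roll-up.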

So, if it's good for business and it's good for abuse mitigation, why shouldn't my reputation system require transactional-style reliability throughout? The answer is a single word: performance.

Every reversible input adds at least one additional database read-modify-write to store the event. That at least doubles the database (and messaging network) load. If the database is locked for each input, high-transaction-rate systems can create a severe bottleneck waiting for resources to become available. Not only that, it at least doubles the size of the database, and depending on the input-to-output ratio, may increase it by an order of magnitude or more.

For some applications, say ratings and reviews (see Chap_4-User_Reviews_with_Karma), storing all the inputs is already a part of the application design, so these costs must be met. But when you look at the transaction rates of a top-100 website or online game, it quickly becomes cost-prohibitive to store everything everyone does in an indexed database.

When Transaction Rates Go Stratospheric: Yahoo! IP-Based Spammer Reputation

Approximate can be good enough!

For applications that have continuous input, the reputation model can be designed to deal with best-effort message delivery. Yahoo!'s reputation model for identifying spammer IP addresses takes user inputs in the form of “This Is Spam” votes and also receives a continuous stream of volume data: the total email received per IP address per minute. There are hundreds of thousands of mail server IP addresses and at any moment, tens of thousands of them are sending mail to Yahoo! mailboxes. The user inputs are several orders of magnitude less common than the incoming traffic data.

It was clear immediately that the internal messaging infrastructure would quickly become overwhelmed if every reputation input message had to be sent with guaranteed delivery. Instead, a protocol with much lighter overhead was selected to provide best-effort delivery for these messages, while the user actions were sent over a slower but more reliable channel. The model was designed to keep a windowed history of traffic data for each target IP address and a floating average volume for calculations. This gave the model a sense of time as well as making it insensitive to the occasional dropout of traffic data. Likewise, the inputs never had to be reversed, so they were never stored in deployment. During the testing, tuning, and debugging phase, the traffic data was stored in order to verify the model's results.
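The windowed-history idea can be sketched as follows; the window length and names are assumptions. The key property is that a dropped best-effort sample simply never enters the window, rather than corrupting the average:

```python
from collections import deque

class TrafficWindow:
    """Keeps a fixed window of per-minute volume samples and a floating
    average; a lost sample merely shortens the window for a while, it
    never drives the score out of range."""

    def __init__(self, minutes=60):
        # deque with maxlen silently ages out the oldest sample.
        self.samples = deque(maxlen=minutes)

    def record(self, volume):
        self.samples.append(volume)

    def floating_average(self):
        return sum(self.samples) / len(self.samples) if self.samples else 0.0
```

With a three-minute window, recording volumes 10, 20, 30, 40 leaves an average of 30.0: the oldest sample has aged out, giving the model its sense of time.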

Why not just design all of the models to be compatible with best-effort delivery, then? Because that becomes a much more difficult task with user inputs. In many applications, the user evaluations took considerable effort to construct (e.g., reviews) or are a part of the user's identity (e.g., favorites) and can't just disappear: if this ever approaches being a common occurrence, you risk driving the customer away. The same is true for roll-up reputation attached to users' objects…their karma scores, the ratings for their eBay shop, or the reviews of their favorite objects. People track their scores like hawks and become upset if ratings are incorrectly calculated.

Best-effort message delivery also makes reversible reputation math very messy…because of lost messages, counters can go negative or out of range, and scores can get stuck at values inappropriate for the number of reported inputs. For example: “0 users say this movie is 5-stars!”
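One common defense is to clamp roll-up arithmetic so that lost or duplicated messages can't push a counter out of range. This helper is a hypothetical sketch, not from any particular framework:

```python
def apply_delta(count, delta, lo=0, hi=10**9):
    """Fold a possibly unreliable increment/decrement into a counter,
    clamping so lost messages can't drive it negative or out of range."""
    return max(lo, min(hi, count + delta))
```

A reversal message arriving for an input that was dropped would otherwise produce the “0 users say this movie is 5-stars!” pathology; here it just bottoms out at zero.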

Lastly, losing inputs in a time-critical application, such as the one described in Chapter_10, could increase operational costs.

There is a compromise: if you build a reputation framework that supports reversible transactions, and only use that feature judiciously in your models, you can have the best of both worlds.

Model Complexity: Complex vs. Simple

Reputation models often start out as simple as computation gets: store a number starting at 0, periodically add 1 to it, and update the record. In that case there seems to be no need for a reputation framework beyond what the current application environment and the database already provide.

In practice, however, reputation systems either evolve or fail. Reputation is used to increase the value of your entities. So either the model succeeds at increasing value, or it needs to be abandoned or changed to reflect the actual use pattern that will achieve this goal. This usually involves making the system more complex, for example tweaking it to include a more subtle set of inputs. On the other hand, if it is successful, it may well become the target of manipulation by those who directly benefit from such actions: business owners on Yelp formed collectives to cross-review each other's businesses and increase their rankings; see “Merchants angry over getting yanked by Yelp” by Ellen Lee and Anastasia Ustinova in the July 4, 2008 edition of the San Francisco Chronicle. Even when successful, reputation models must constantly evolve.

So, whether a reputation model starts out complex or succeeds sufficiently to become complex, the reputation framework should be designed to assume both: that models may contain several, perhaps dozens, of reputation processes…and, more importantly, that the model will change over time. This suggests version control for both the framework and the models that execute within it.

As the number of processes in a model increases, there are performance trade-offs against the other requirements on the framework. If the entire framework is transactional…say, by wrapping a complex process chain with a database lock…then, though reliable, it will suffer increased lock contention, which may significantly decrease throughput.

It may also be possible to do static graph analysis of reputation models to combine processes and parallelize execution, in effect reducing complexity before execution to improve performance.

Data Portability: Shared vs. Integrated

Should reputation statements be kept separate on a per-application or per-context basis, or be made available for other uses? Intuitively, from both a simplified security view and an incremental development approach, the answer would be to isolate the data and integrate the reputation framework in-line with the application code, which has appropriate access to the database as well as a code-level understanding of the source and target objects. For single-application environments, data portability seems like a nice-to-have feature, something that the team will get to in Version 1.x.

While it is true that reputation claims are always most valuable in the context in which they were generated, sometimes the context stretches across multiple applications…for example, Yahoo! Shopping and Yahoo! Music both benefit from users evaluating new music releases…either at time of purchase or even when listening to them via real-time stream.

Shared context is the primary reason reputation frameworks should treat targets as foreign keys: identifiers in an external database. But it also means that the reputation statement store should be accessible by multiple applications. This exact situation comes up more often than you might think, and one-off solutions for sharing or duplicating data across applications quickly become cost prohibitive. When Yahoo! Local and Yahoo! Travel decided that they should share their hotel and restaurant reviews between the applications, they incurred a large cost to merge the data, since they each used different target identifiers for the same real-world business.

Even if you don't yet have multiple applications in development, plan for success: Create sharable object identifiers that can be used by new applications, and make the schema portable…it doesn't take any longer and keeps your options open.
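A portable statement record might look like the following sketch, where the target is a shared identifier rather than an application-private row id. All field names and the identifier format are assumptions for illustration:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class ReputationStatement:
    """A portable reputation statement: the target is a shared foreign
    key, not a private row id, so other applications can join on it."""
    context: str   # e.g. "restaurants" — reputation is always in context
    source: str    # e.g. "user:1234"
    claim: float   # e.g. a 4.5-star rating
    target: str    # shared identifier, e.g. "biz:us-ca-sf-00017"
```

Had Yahoo! Local and Yahoo! Travel agreed on a shared target identifier like this from the start, their review merge would not have required reconciling two id schemes for the same real-world business.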

Even with shared source and target identifiers, you might be tempted to think that the reputation framework should be developed in-line with the initial application. For example, you might presume that a master application would be the only one to calculate and store the reputations…all ancillary apps would be read-only. Many systems are structured this way. This sounds great, but it creates a dependency on the master application's development team whenever the reputation model needs to change to accommodate a new input or a new interim score, or to be re-tuned in response to anomalous behavior, perhaps introduced by a new use pattern.

In Chap_6-Inferred_Karma we discuss the pattern of inferred karma, where reputation is borrowed from other contexts to create an interim reputation that can be used when no context-specific claim is available or yet trustworthy. As detailed there, when Yahoo! Answers borrowed membership reputation, IP reputation, and browser-cookie reputation to use as reasons to trust someone's report of abusive content, the creators of those component reputations had no inkling that they'd be used this way.

This mixing of external scores may seem to be a contradiction of the “reputation is always in context” rule, but it doesn't have to be, as long as each re-use maintains all, or most, of the original context's meaning. It does mean something significant that a Yahoo! account is months or years old compared to one created today. The same thing goes for the age of a browser cookie, or abuse history on an IP address. None of these components are reliable alone, nor should any of them be used to make serious decisions about the reliability of a user, but combined they might just give you enough information to make the user's experience a little better.

A reputation framework that allows multiple applications to read and write reputation statements to each other's contexts can lead to serious data problems…a bug in one model can damage the execution of another. For such environments a security model and/or a disciplined development process is strongly suggested.

See the detailed example in Appx_A-The_Yahoo_Reputation_Platform for the description of a specific implementation.

Optimistic Messaging vs. Request-Reply

The reputation framework really has only two interfaces to the application environment: a channel for routing inputs to an executing model, and a method for sending a signal that something interesting happened in the model. In the vast majority of cases, reputation scores are not output in the classical sense, but stored for later retrieval by application code.

This approach means that the reputation framework can be isolated from the application for performance and security reasons. One way to think of it is as a special database pre-processor…turning inputs into database records.

Given the narrow input channel, the framework implementor may choose between several messaging models. Certainly programmers are most familiar with request-reply messaging…most everything works that way…programs call functions that return results when complete, and even the web can make your computer wait up to a minute for a server to reply to an HTTP GET request. It seems logical for an application to send a reputation input, such as a star rating for a movie, to the reputation model and wait for confirmation of the new average score to return. Why bother with any other messaging model?

Performance at scale. Many inputs, especially those that are not reflected back to the user, should be sent without waiting for acknowledgment. This is called optimistic (or asynchronous) messaging, colloquially known as fire-and-forget. It makes the application much faster, since the application no longer has to wait for a network round trip plus all of the database accesses required by the reputation model. The load shifts instead to the messaging infrastructure and the machines that process the messages. This is how large-scale applications work…send messages into a cloud of computers that can be expanded and reconfigured to handle increasing traffic.

Note that request-reply can be added as a lightweight library on top of an optimistic messaging service…pass a callback handle to the model, which gets triggered when the result is complete. Application developers would use this wrapper sparingly, only when they need an immediate new result. Generally, this is rare. For example, when users write a review for a hair dryer, they don't see the effects on the roll-up score immediately…instead they're looking at the review they just submitted. In fact, there are abuse-mitigation reasons why delaying the display of the roll-up might be a good idea.

Even if it seems that you will never need optimistic messaging because you'll never have that many inputs, we urge that your reputation framework (even if it is just a local library) provide the semantics of messaging (e.g., SendReputationEvent() and SendReputationEventAndWait()), even if it is implemented as a straightforward function call underneath. This is low cost and will allow your applications to keep running even if you significantly change the operational characteristics of your framework.
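A sketch of those two entry points, layered over an optimistic queue. The snake_case names mirror the SendReputationEvent()/SendReputationEventAndWait() semantics above; the queue, the worker, and the `process` callback are all illustrative assumptions:

```python
import queue
import threading

# One shared input channel; a worker pool would drain it in production.
_events = queue.Queue()

def _worker(process):
    """Drain the queue; `process` stands in for executing the model."""
    while True:
        event, reply = _events.get()
        if event is None:          # shutdown sentinel
            break
        result = process(event)
        if reply is not None:      # only request-reply callers wait
            reply.put(result)

def start_worker(process):
    t = threading.Thread(target=_worker, args=(process,), daemon=True)
    t.start()
    return t

def send_reputation_event(event):
    _events.put((event, None))     # optimistic: fire-and-forget

def send_reputation_event_and_wait(event):
    reply = queue.Queue(maxsize=1)
    _events.put((event, reply))
    return reply.get()             # request-reply layered on top
```

The waiting variant is just a thin wrapper over the optimistic channel, which is exactly why providing both signatures is cheap even when the implementation underneath is a plain function call.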

Framework Designs

Each trade-off made when selecting framework design requirements has implementation impacts for the framework itself and constrains the reputation models and applications that utilize it. To more clearly illustrate the costs and benefits of various configurations, we now present two reputation framework designs: the minimalist Invisible Framework and the fully scalable Yahoo! Reputation Platform.

Framework Design               | Calculations     | Scale | Reliability            | Portability | Messaging     | Model Complexity
Invisible Reputation Framework | Dynamic (option) | Small | Best-Effort            | Embedded    | Request-Reply | Simple
Yahoo! Reputation Platform     | Static           | Huge  | Transactional (option) | Shared      | Optimistic    | Complex

The Invisible Reputation Framework: Fast, Cheap, and Out of Control

When most applications start implementing social media, including the gathering of user evaluations or reputation, they place all of the input, processing, and output code in-line with the application. Need a thumbs-up? Just add it to the page, create a votes table to store an entry for everyone who votes, and do a database query to count the entries at display time. It is quick and easy. Then, when they start using a score like this for ranking search results, they quickly realize that they can't afford to do that many queries dynamically, so they create a background or timed process to recalculate the reputation for each entity and attach it to that entity's record.

And so it goes, feature after feature, one new-use optimization after another until either the application becomes too successful and scaling breaks or the cost-benefit of another integration exceeds the business pain threshold.

Everyone starts this way. It is completely normal.

Requirements

“Time to market is everything…we need reputation (ratings, reviews, or karma) and we need it now. We can fix it later.” This quote summarizes the typical implementation plan for legions of application developers. Get something…anything…working, ship alpha-quality code, and promise yourself you will fix it later. This is a surprisingly tight constraint on the design of the invisible reputation framework, which is actually no framework at all!

  • Mostly Dynamic Calculations with Static Results Cache as Optimization
    • When first implementing a reputation system, the obvious and trivial approach is to store some ratings and, when you need to display the roll-up, do a database query to calculate the average just in time. This is the dynamic calculation method, and it allows for some interesting reputation scores, such as Your Friends Rated This 4.5 Stars. When reputation gets used for things such as influencing the search rank of entities, this approach becomes too expensive. Running hundreds of just-in-time averages against the database when only the top 10 results will be displayed 90% of the time is cost-prohibitive. The typical approach is a dynamic-static hybrid: at a fixed interval, say daily, calculate a dynamic database average for each and every entity and store the roll-up value in a search index for speedy searches tomorrow. When you need a new index for some new calculation, repeat the process. Clearly this can quickly become unwieldy.
  • Small Scale: 100 Transactions/Minute or Less
    • At small scale, no more than one transaction per second or so, the database is not the bottleneck…and even if it becomes one, many commercial scaling solutions are available.
  • Best-Effort Reliability with Ad-Hoc Cleanup
    • Reliability is a secondary consideration when time to market is the main driver. Code developed in internet time will be buggy and revised often. It's simply impractical to expect that reputation would be completely accurate. This requires that the reputation models themselves be built to auto-correct obvious errors, such as a roll-up going negative or out of range, since this will happen.
  • Application Bound Reputation Data
    • The temptation will be to store reputation claim values bound as closely to the target data records as possible. After all, that will make search ranking and other similar uses easier. Only one application will read and write the reputation, so there is no need to worry about future uses, right?
  • Request-Reply messaging direct to database
    • When coding in a hurry, why introduce a new and potentially unfamiliar optimistic messaging system? It adds cognitive, implementation, and messaging overhead. The invisible reputation framework trusts the engineer to code up the database action in-line in the application: the SQL query requests are sent using the vendor-supplied interface, and the application waits dutifully for the reply. This is the most familiar approach and the easiest to understand, even if it has poor performance characteristics.
  • Simple or ad-hoc model support
    • Model? What model? There is no execution environment for the reputation model, which is broken into little pieces and spread throughout the application. Of course, this makes the model code much harder to maintain…future coders have to search through the source to find all the places where the model is implemented. If the programmers think ahead, they might break the common functions out into a library for reuse, but only if time allows.
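The dynamic-static hybrid described under “Mostly Dynamic Calculations” above amounts to a periodic batch job: scan the raw ratings once, then store one roll-up per entity for fast lookup tomorrow. A hypothetical sketch, with a dict standing in for the search index:

```python
# Daily batch job for the dynamic-static hybrid: one pass over the raw
# ratings table produces a roll-up per entity. The returned dict stands
# in for the search index that would be rebuilt at each interval.

def rebuild_index(ratings):
    """ratings: iterable of (entity_id, stars) rows from the database."""
    totals = {}
    for entity_id, stars in ratings:
        count, total = totals.get(entity_id, (0, 0.0))
        totals[entity_id] = (count + 1, total + stars)
    return {eid: total / count for eid, (count, total) in totals.items()}
```

Each new use of reputation tends to demand another index and another pass like this one, which is exactly how the approach becomes unwieldy.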
Implementation Details

A typical invisible reputation framework is implemented as in-line code. Taking as an example a typical LAMP (Linux, Apache, MySQL, PHP/Perl/Python) installation, this means that the data schemas of the reputable-entity tables are extended to include reputation claim values, such as average rating or karma points, and a new table is set up to contain any stored/reversible user-created reputation statements. The database calls are made directly by the application through a standard PHP MySQL library, such as the MySQLi class.
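The same table shapes can be sketched with Python's stdlib sqlite3 standing in for MySQL/MySQLi; the table and column names are illustrative assumptions:

```python
import sqlite3

# The schema shape described above, sketched in SQLite (a LAMP shop
# would use MySQL via MySQLi; the table layout is what matters here).
conn = sqlite3.connect(":memory:")
conn.executescript("""
    -- Reputable-entity table, extended with a cached claim value.
    CREATE TABLE products (
        id INTEGER PRIMARY KEY,
        name TEXT,
        avg_rating REAL DEFAULT 0.0   -- roll-up stored with the target
    );
    -- Stored/reversible user-created statements, indexed by source.
    CREATE TABLE ratings (
        source_id TEXT,
        product_id INTEGER REFERENCES products(id),
        stars INTEGER
    );
    CREATE INDEX ratings_by_source ON ratings(source_id);
""")
conn.execute("INSERT INTO products (id, name) VALUES (1, 'hair dryer')")
conn.execute("INSERT INTO ratings VALUES ('user:1', 1, 4)")
conn.execute("INSERT INTO ratings VALUES ('user:2', 1, 2)")
# In-line roll-up recalculation, exactly as the application would do it.
conn.execute(
    "UPDATE products SET avg_rating ="
    " (SELECT AVG(stars) FROM ratings WHERE product_id = products.id)"
)
```

The index on source_id is what later makes per-user cleanup feasible; everything else is the bare minimum for a ratings feature.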

With this approach, simple ratings and review systems can be created and deployed in a few days, if not hours. It doesn't get any quicker than that.

Lessons Learned

Assuming your application succeeds and grows, taking the absolutely quickest and cheapest route to reputation will always come back to bite you in the end.

Most likely, the first thing to cause trouble is the database…as the transaction rate increases, there will be lock contention on writes and general traffic jams on reads. For a while, commercial solutions will help, but ultimately decoupling the application code from the reputation framework functionality is required…both for reading and for writing statements.

Another set of challenges…tuning the model, adding more reputation uses, and the attendant debugging…becomes cost-prohibitive as your application grows, causing time to market to suffer significantly. Not planning ahead ends up making you slower than your competition. Again, a little investment in compartmentalizing the reputation framework, such as isolating the execution of the model into a library, ends up being a great cost-benefit tradeoff in the medium to long term.

Whatever you do, compartmentalize!

Reputation Framework Best Practice: Compartmentalize!

If the development time budget allows only one best-practice recommendation to be applied to the implementation of a reputation framework, we recommend this above all others: Compartmentalize the framework! By that we mean the reputation model, the database code, and the data tables. Yes, you can count that as three things if you must. They are listed in priority order.

The Yahoo! Reputation Platform: Shared, Reliable Reputation at Scale

Yahoo! is a collection of diverse web applications that collectively make it the web site with the largest audience in the world. Over a half-billion different people use a Yahoo! application each month. In a single day, Yahoo! gathers millions of user evaluations, explicit and implicit, of reputable entities spread over dozens of topic areas and hundreds of applications. And almost none of it was shared across similar topics or applications. We've already mentioned that Yahoo! Travel and Yahoo! Local only started to share hotel and restaurant reviews a few years ago…and that one-off integration provided no technical or operational assistance to Yahoo! Music and Yahoo! Shopping when they similarly wanted to share information about user DVD ratings.

Yahoo! Requirements

Outrageously large scaling requirements and sharing were the driving requirements for the Yahoo! Reputation Platform.

Yahoo Reputation Platform Requirements in Force-Ranked Order:

  • Huge scale: 10,000,000,000 transactions/year
    • When Yahoo! started their efforts at building a common reputation infrastructure in the form of a standard platform, the rate of user ratings entering Yahoo! Music Experience, their real-time music streaming service, had reached one billion ratings per year. This was more than half of the total explicit user ratings they were gathering at the time across all of their sites. The executives were excited by the possibilities of this project and how it might be used in the future, when cellphones would become the predominant device for accessing social data, and therefore suggested that the minimum input rate should exceed that seen by Yahoo! Music by at least a factor of 10x…or ten billion transactions per year. That's an average of about 320 transactions per second, but traffic is never that smooth…common peaks would exceed 1,000/second.

That may seem huge, but consider how the database transaction rate might be inflated by another factor of 2x to 10x, depending on the complexity of the reputation model…each stored roll-up could add a (lock)-read-modify-write-(unlock) cycle. Also, using and/or displaying reputation (aka capturing the value) multiplies the number of reads by a significant amount…in lieu of a concrete example, say 5x. So by the time things get down to the database, for a typical model, we could see one hundred billion reads and twenty or thirty billion writes! Clearly scale is the requirement that will impose the most challenges on the framework implementation team.
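As a back-of-the-envelope check on the input rate: ten billion transactions per year averages out to roughly 317 per second, with peaks running far higher, as noted above.

```python
# Ten billion inputs per year, averaged over a (non-leap) year.
SECONDS_PER_YEAR = 365 * 24 * 60 * 60        # 31,536,000
inputs_per_year = 10_000_000_000

avg_inputs_per_second = inputs_per_year / SECONDS_PER_YEAR
print(round(avg_inputs_per_second))          # → 317
```

The 2x-10x write inflation and the 5x read multiplier quoted above are then applied on top of this base rate.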

  • Reputation events sharable across applications and contexts
    • Why bother building a gigantic reputation platform? One of the driving requirements was that reputation information be shared across applications and Yahoo! sites. Travel, Local, Maps and other sites should be able to access and contribute user evaluations to the reputable entities…businesses and services…they all have in common. Shopping, Tech, Coupons and others all have users interacting with products and merchants.

It was clear that Yahoo!'s segmentation into applications and sites did not provide the best contexts for reputation…the properties of the entities themselves…what kind of thing they were, where they were located, and who was interacting with them…these were the real reputation contexts. A common platform was a great way to create a neutral, shared environment for collecting and distributing the users' contributions.

  • Transaction-level reliability used by most models
    • Reputation is a critical component of Yahoo!'s overall value and is used throughout the service: from web search to finding a good hotel to identifying the songs you never want to hear on your personal web-radio station.

This value is so important that it creates the incentive for significant reputation abuse. For example, there is a billion-dollar industry, complete with acronym, for capitalizing on design flaws in search algorithms: SEO, or Search Engine Optimization. Likewise, the reputation models that execute in this framework are subject to similar abuse. In fact, for many applications there are so many targets that the number of user evaluations for each is very small, making the abusive manipulation of roll-ups accessible to small groups or even dedicated individuals (see Chap_5-Commercial_Incentives ).

Generally, it is not technically possible for software to detect small-scale reputation abuse. Instead, social mechanisms are used, typically in the form of a Report Abuse button: the community polices reputation in these cases. Unfortunately, this method can be very expensive, since each report requires a formal review by customer care staff to decide if the reputation has been manipulated, and each abuser typically generates dozens, if not hundreds, of abusive reputation contributions. The math is pretty straightforward…even if abuse is reported for only one in a hundred product reviews, at Yahoo! scale it is not cost efficient to have humans screen everything.

After identifying a user account, browser cookie, or IP address that is abusing the reputation system, customer care must be able to remove all of the related reputation created by that user. Nothing they created can be trusted, and everything is presumed to be damaging, perhaps even to multiple entities. This adds a transactional requirement to the framework: every explicit user input and its side-effected roll-ups must be reversible. It is not enough to just delete the user and the specific event…the reputation model must be executed to undo the damage. Note that this requirement is very expensive and is only applied to reputation models that require it.
For example, in tracking a user's personal ratings for songs, the Yahoo! Music Engine may not require reversibility…charging a subscription fee is considered sufficient deterrent to reputation abuse. Also, large-volume models that do not surface roll-up reputation to the users may not be transactional. A good example is the Yahoo! Mail IP spammer reputation model, as described in the patent application, which does not keep a transactional history of individual IP addresses but does keep the This Is (Not) Spam votes on a per-user basis for abuse management, as described above.
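The reversibility requirement implies that the framework must retain each explicit input (the reputation statement itself), not just the aggregate, so that an abuser's contributions can be replayed through the model in "undo" mode. A minimal sketch of a reversible roll-up, with illustrative names:

```python
class ReversibleAverage:
    """An average roll-up that supports undoing any previously applied
    rating. Reversing an event needs its original claim value, so the
    framework must keep every explicit input on file."""

    def __init__(self):
        self.count = 0
        self.total = 0.0

    def apply(self, rating):
        self.count += 1
        self.total += rating

    def reverse(self, rating):
        # Called when customer care voids an abuser's statements:
        # each of their stored inputs is replayed in undo mode.
        self.count -= 1
        self.total -= rating

    @property
    def mean(self):
        return self.total / self.count if self.count else 0.0
```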

  • Complex and interconnected model support
    • Between Yahoo!'s existing understanding of reputation abuse in their current systems as well as the strong requirement to share data between applications, it was clear that the reputation models that would execute on this platform would be more complex than simply counting and averaging a bunch of numeric scores.

There was also an original requirement for modular re-use of hypothetical reputation model components in order to speed up the development and deployment of new applications integrating reputation.

  • Static calculations
    • As a result of the performance constraints imposed by the scaling requirements, it was immediately clear that models running in the framework should execute with linear, O(n), performance characteristics overall, which precluded considering dynamic computation. There is no time to grab multiple items from the database to recalculate a custom value for every model execution. All calculations would be limited to roll-forward computation…a roll-up must be updatable using a fixed number of repository reads, usually exactly one. This means that the platform will not support per-user reputation contexts, such as Your Friends Rated This Entity 4.5. Time-limited contexts, such as Last Year, Last Month, etc., could still be implemented as specific static reputation calculations. (See the discussion of windowing decay systems in Chap_3-Decay .)
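The static/dynamic distinction can be made concrete with a small sketch, using a plain dict to stand in for the repository (names are illustrative):

```python
def apply_static(repo, target, rating):
    """Roll-forward update: a fixed number of repository reads (here
    exactly one) and one write per input event."""
    rollup = repo.get(target, {"count": 0, "sum": 0.0})  # single read
    rollup["count"] += 1
    rollup["sum"] += rating
    repo[target] = rollup                                # single write

def recompute_dynamic(events, target):
    """Dynamic recomputation, precluded at this scale: every query
    re-reads all previous inputs for the target."""
    ratings = [r for (t, r) in events if t == target]
    return sum(ratings) / len(ratings)
```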
  • Optimistic Messaging System
    • Given that we can anticipate that the database is going to be the choke point for this platform, a request-reply protocol to the reputation system would be unable to deliver on a reliable service level. There are just too many inputs coming into too many complex model instances to be sure how long it will take for the model to run.

Performance demands an asynchronous, optimistic messaging backbone…where messages are delivered by a load-balancing queue manager to multiple reputation framework instances to be processed on a first-come, first-served basis. Applications that read reputation claim values directly from the database, not via the framework, will always get the best result possible at that moment. The application developer who is collecting explicit user reputation events, such as ratings, has several choices for dealing with the immediate-reply nature of the optimistic messaging system:

  1. If possible (when not displaying a roll-up), assume success and echo the user's input directly back to the user as if it had been returned by the platform.
  2. Implement a callback in the reputation model, and wrap the asynchronous call with one that waits. Don't forget a timeout in best-effort message delivery environments.
  3. Get whatever the current result is from the database and explain to the user, if needed, that it might take a bit longer for their input to be reflected in the roll-up.
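A minimal sketch of the optimistic pattern, with an in-process queue standing in for the messaging backbone and choice #1 (echoing the user's input) on the application side; all names are illustrative:

```python
import queue
import threading

event_queue = queue.Queue()   # stands in for the messaging backbone
claim_store = {}              # stands in for the reputation repository

def model_worker():
    # Framework side: consume input events first-come, first-served.
    while True:
        target, rating = event_queue.get()
        rollup = claim_store.setdefault(target, {"count": 0, "sum": 0.0})
        rollup["count"] += 1
        rollup["sum"] += rating
        event_queue.task_done()

threading.Thread(target=model_worker, daemon=True).start()

def submit_rating(target, rating):
    """Application side: enqueue the event and return immediately,
    optimistically echoing the user's own input back."""
    event_queue.put((target, rating))
    return rating
```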

Yahoo! Implementation Details

The architecture of the Yahoo! Reputation Platform is described in two recently published patent applications:

  • U.S. Patent Application 11/774,460: Detecting Spam Messages Using Rapid Sender Reputation Feedback Analysis. Libbey, Miles; Farmer, F. Randall; Mohsenzadeh, Mohammad; Morningstar, Chip; Sample, Neal
  • U.S. Patent Application 11/945,911: Real-Time Asynchronous Event Aggregation Systems. Farmer, F. Randall; Mohsenzadeh, Mohammad; Morningstar, Chip; Sample, Neal

The implementation details we describe here for their platform are derived primarily from these documents, interviews with the development teams and the authors' personal experience setting the platform's design requirements.

High Level Architecture

Figure_A-1 renders the Yahoo! reputation framework architectural stack using a traditional layer cake model. Though the onerous scaling requirements make the specifics of this implementation very specialized, at this level of abstraction most reputation frameworks are the same: they each need message dispatching services, a reputation model execution engine, an interface to a reputation statement repository (here, a relational database service), a method to activate or signal external processes, and a separate high-performance, query-only interface to the data for application display and other uses, such as search result ranking.

Figure_A-1: Yahoo! Reputation Platform Layers

The specific choices that the Yahoo! Reputation Platform team made for implementing each layer will now be described in detail.

Messaging Dispatcher

Figure_A-2 shows the top architectural layer of the framework, a message dispatcher that has the job of accepting inputs, usually in the form of reputation statements (source, claim, and target), transforming them into standard-format messages, and sending them on to one of many model execution engines to be processed.

Figure_A-2: Yahoo! Reputation Platform Message Dispatcher

There are three main components of this subsystem: a set of listeners, a single message event queue, and the message consumers, of which there are two subtypes: dispatch consumers and relay consumers.

  • Input Listeners
    • This is the compatibility layer for messaging. Different listeners can have different semantics. For example, some listeners can be implemented to understand asynchronous messages in different data formats, such as XML or JSON. They may be hand-coded function calls that do last-moment data transformations, such as turning a source or target identifier from one context into an identifier from another context. Another common listener enhancement is to add a timestamp to, and/or to write a log of, each message to aid in abuse mitigation or model testing and tuning.

It is important to note that when reputation models interact with each other across contexts, as is often the case with karma systems, the first model will send a message via this message dispatching service instead of making a direct call to a hard-coded entry-point in the second model.

Once normalized into a common format, the messages are then handed off to the message event queue for first-in, first-out processing.

  • Message Event Queue
    • The message event queue holds all of the input messages waiting for processing. This process is tightly optimized and can do load-balancing and other throughput optimizations, not detailed here. There are many books and papers on these methods.
  • Message Consumers
    • Each message will be fetched by a consumer process, typically to be forwarded somewhere else. There can be several types, each with their own semantics, but the main two kinds are Dispatch Consumers and Relay Consumers.
  • Dispatch Consumer
    • The dispatch consumer delivers messages to the model execution engines. Typically there is one dispatch consumer for every known model execution engine. In self-scaling systems, where more resources are brought online automatically, a newly started model execution engine might register with a dispatcher, which would then start a dispatch consumer to begin forwarding messages to it.
  • Relay Consumer
    • When more than one message dispatcher is running, it is often desirable to separate message traffic, and by extension the ability to modify reputation, by context. This provides data modification isolation, limiting possible damage from context-external applications. But, there are specific cases where cross-context modification is desirable…karma being the typical example: multiple contexts contribute reputation to the user-profile context.

In order to allow this cross-context, cross-model, and cross-dispatcher messaging, certain pre-determined messages are consumed by the relay consumer and passed on to other dispatchers, in effect stacking dispatchers as needed. This way the Yahoo! Movies ratings and review system can send a message to the Yahoo! Profiles karma model without knowing the address of that dispatcher…it can just send a message to the one for its own framework and know that the message will get relayed to the appropriate servers. Note that a registration service, such as the one described for the dispatch consumer, is required to support this functionality.

There can be many message dispatchers deployed, and this layer is a natural location to provide any context-based security that may be required. Since changes to the reputation database only come by sending messages to the reputation framework, limiting application access to the dispatcher that knows the names and addresses of the context-specific models makes sense. As a concrete example: only Yahoo! Travel and Local had the keys needed to contact, and therefore make changes to, the reputation framework that ran their shared model, but any other company property could read their ratings and reviews using the separate reputation query layer; see Appx_A-Reputation_Query_Interface .

The Yahoo! Reputation Platform's dispatcher implementation was optimistic…all application API calls return immediately without waiting for model execution. The messages were stored with the dispatcher until they could be forwarded to a model execution engine.

The transport services used to move messages to the dispatcher varied by application…most were proprietary high-performance services. A few models, such as Yahoo! Mail's spam IP reputation, accepted inputs on a best-effort basis, using the fastest available transport service.
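The listener-normalize-dispatch flow described above can be sketched in a few lines; the message fields follow the source/claim/target form from the text, while the function names and the context-prefix routing rule are illustrative assumptions:

```python
import json
import time

def json_listener(raw):
    """One listener flavor: parse a JSON input, normalize it into the
    standard message format, and timestamp it for logging and abuse
    mitigation."""
    data = json.loads(raw)
    return {
        "source": data["source"],
        "claim": data["claim"],
        "target": data["target"],
        "value": data["value"],
        "received_at": time.time(),
    }

def dispatch(message, engines, relay_queue):
    """Route a normalized message by its claim context: known contexts
    go to a local model execution engine (the dispatch-consumer path);
    anything else is relayed toward another dispatcher (the
    relay-consumer path)."""
    context = message["claim"].split(".", 1)[0]
    if context in engines:
        engines[context].append(message)
    else:
        relay_queue.append(message)
```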

The Yahoo! Reputation Platform high-level architectural layer cake shown in Figure_A-1 contains all the required elements of a typical reputation framework. New framework designers would do well to start with that design, then design or select implementations of each component to meet their own requirements.

Model Execution Engine

Figure_A-3 shows the heart of the reputation framework, the model execution engine, which manages the reputation model processes and their state. Messages from the dispatcher layer are passed into the appropriate model code for immediate execution. The model execution engine reads and writes its state, usually in the form of reputation statements via the reputation database layer. See Appx_A-Reputation_Repository . Model processes run to completion, and if cross-model execution or optimism is desired, may send messages to the dispatcher for future processing.

The diagram also shows that models may use the external event signaling system to notify applications of changes in state. See Appx_A-External_Signaling_Interface .

Figure_A-3: Yahoo! Reputation Platform Model Engine

Much of this platform's performance comes from parallel processing: the Yahoo! reputation platform implements an Engine Proxy that routes all incoming message traffic to the engine currently running the appropriate model in a concurrent process. This proxy is also in charge of loading and initializing any model that is not currently loaded or executing.

The Yahoo! reputation platform implemented models in PHP, with many of the messaging lines in the model diagrams realized as function calls instead of higher-overhead messages. See Appx_A-Your_Mileage_May_Vary for a discussion of the rationale. The team chose PHP mostly due to their personal expertise and tastes…there was no particular technical requirement that drove this choice.
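The proxy's lazy-loading behavior can be sketched as follows; the class and method names, and the trivial counting model used for illustration, are assumptions, not the platform's actual interface:

```python
class CountModel:
    """Trivial illustrative model: count events per target."""

    def __init__(self):
        self.counts = {}

    def on_event(self, message):
        target = message["target"]
        self.counts[target] = self.counts.get(target, 0) + 1
        return self.counts[target]

class EngineProxy:
    """Routes each message to the engine running the appropriate
    model, lazily loading and initializing any model that is not
    currently running."""

    def __init__(self, model_factories):
        self._factories = model_factories  # claim type -> constructor
        self._running = {}

    def handle(self, message):
        claim = message["claim"]
        if claim not in self._running:     # load/initialize on first use
            self._running[claim] = self._factories[claim]()
        return self._running[claim].on_event(message)
```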

External Signaling Interface

In optimistic systems, such as the Yahoo! reputation platform, output happens passively…the application has no idea when a change happened or what the results of any input event were. Some unknown time after the input, a query to the database may reflect a change, or may not. In high-volume applications, this is a very good thing…it is just impractical to wait for every side effect of every input to propagate across dozens of servers. But when something important (read: valuable) happens, such as an IP address switching from good actor to spammer, the application needs to be informed ASAP.

This is accomplished by using an external signaling interface. For smaller systems, this can just be hard-coded calls in the reputation model implementation. But larger environments normally have signaling services in place, which typically log signal details and have mechanisms for executing processes that take actions, such as changing user access or contacting supervisory personnel.

Another kind of signaling interface can be used to provide a layer of request-reply semantics to an optimistic system: when the model is about to complete, a signal gets sent to a waiting thread that was created when the input was sent. The thread identifier is sent along as a parameter throughout the model as it executes.

Reputation Repository

On the surface, the reputation repository layer looks like any other high-performance, partitioned, and redundant database. The specific features for the repository in the Yahoo! reputation platform are:

Like the other layers, the repositories may themselves be managed by a proxy manager for performance.

The reputation claim values may be normalized by the repository layer, so that those reading the values via the query interface don't have to know the input scale.

To improve performance, many read-modify-write operations, such as increment and addToSum, are implemented as stored procedures at the database level, instead of as code-level mathematical operations in the model execution layer. This significantly reduces interprocess message time, as well as the duration of any lock contention on heavily modified reputation statements.
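The point of pushing addToSum into the database is that the entire read-modify-write happens server-side in one operation. SQLite has no stored procedures, so in this sketch a single UPDATE statement stands in for one; the table and function names are illustrative:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE rollup (target TEXT PRIMARY KEY, cnt INTEGER, total REAL)"
)
conn.execute("INSERT INTO rollup VALUES ('movie:aliens', 0, 0.0)")

def add_to_sum(target, value):
    # The read-modify-write happens entirely inside the database
    # engine, so no claim value crosses the wire and any row lock is
    # held only briefly.
    conn.execute(
        "UPDATE rollup SET cnt = cnt + 1, total = total + ? WHERE target = ?",
        (value, target),
    )

add_to_sum("movie:aliens", 5.0)
add_to_sum("movie:aliens", 3.0)
cnt, total = conn.execute(
    "SELECT cnt, total FROM rollup WHERE target = 'movie:aliens'"
).fetchone()
```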

The Yahoo! reputation platform also contains features to dynamically scale up by adding new repository partitions (nodes) and cope gracefully with data migrations. Though those solutions are proprietary, we mention them here for completeness and consideration by anyone contemplating such a framework.

Reputation Query Interface

The main purpose for all of this infrastructure is to provide speedy access to the best possible reputation statements for diverse display and other corporate use patterns. The reputation query interface provides this service. It is separated from the repository service because it provides read-only access, and its data access model is presumed to be less restrictive. For example, every Yahoo! application could read user karma scores, even if they could only modify them via their own context-restricted reputation model. Large-scale database query service architectures are well understood and well documented on the web and in many books. Framework designers are reminded that in most applications reputation queries outnumber changes by one or two orders of magnitude; our short treatment of the subject here does not reflect the relative scale of the service.

Yahoo! used context-specific entity identifiers (often in the form of database foreign keys) as source and target IDs…so, even though Yahoo! Movies might have permission to ask the reputation query service for a user's restaurant reviews, that might do it no good without a separate service from Yahoo! Local to map each review's Local-specific target ID back to a data record describing the eatery. The format used is context.foreignKeyValue ; the context. prefix allows for context-specific wild-card searches (see below). There is always at least one special context: user. , which holds karma. In practice, there is also a source-only context: roll-up. , used for claims that aggregate the input of many sources.

Claim type identifiers are of a specific format: context.application.claim . An example is YMovies.MovieReviews.OverallRating , which holds the claim value for a user's overall rating of a movie.
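Parsing these two identifier formats is straightforward; a minimal sketch (the function names are ours, and note that the foreign-key portion of an entity ID may itself contain dots, so only the first dot is significant there):

```python
def parse_claim_id(claim_id):
    """Split a claim identifier of the form context.application.claim."""
    context, application, claim = claim_id.split(".", 2)
    return context, application, claim

def parse_entity_id(entity_id):
    """Split a source/target identifier of the form
    context.foreignKeyValue."""
    context, foreign_key = entity_id.split(".", 1)
    return context, foreign_key
```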

Queries are of the form Source: [SourceIDs], Claim: [ClaimIDs], Target: [TargetIDs] . Besides the obvious use of retrieving a specific reputation statement, the identifier design used in this platform supports wild-card queries (*) that return multiple results:

  • Source:*, Claim: [ClaimID], Target: [TargetID]
    • Returns all of a specific type of claim for a particular target. Example: All of the reviews for the movie Aliens.
  • Source: [SourceID], Claim: context.application.*, Target: *
    • Returns all of the application-specific reputation statements for any targets, made by a single source. Example: All of Randy's ratings, reviews, and helpful votes on other user reviews.
  • Source: *, Claim: [ClaimID], Target: [TargetID, TargetID, …]
    • Returns all reputation statements with a certain claim type for multiple targets. The application supplies the list of targets, such as a list of otherwise-qualified search results. Example: What overall ratings have users given the movies that are currently in theaters near my house?

There are many more query patterns possible, and framework designers will need to predetermine exactly what wild card searches will be supported, as appropriate indexes may need to be created and/or other optimizations required.
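A toy in-memory version of the wild-card matching makes the query patterns above concrete; the function names are ours, and Python's fnmatch-style patterns stand in for whatever indexed lookup a real repository would use:

```python
import fnmatch

def matches(statement, source="*", claim="*", target="*"):
    """True when a (source, claim, target) triple matches the query
    pattern; '*' matches anything, and patterns like
    'context.application.*' select a whole application's claims."""
    s, c, t = statement
    return (
        fnmatch.fnmatchcase(s, source)
        and fnmatch.fnmatchcase(c, claim)
        and fnmatch.fnmatchcase(t, target)
    )

def query(statements, source="*", claim="*", targets=("*",)):
    """Return the statements matching the pattern for any listed target."""
    return [
        st
        for st in statements
        if any(matches(st, source, claim, t) for t in targets)
    ]
```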

Yahoo! supported both RESTful interfaces and JSON protocol requests, but any reliable protocol would do. It also supported returning a paged window of results, reducing interprocess messaging to just the number of statements required.

Yahoo! Lessons Learned

During the development of the Yahoo! Reputation Platform, the team wandered down many dark alleys and false paths. Presented below are some of the warning signs and insights gained…they aren't intended as hard-and-fast rules, just friendly advice.

  • It is not practical to use prebuilt code blocks to build reputation models, because every context is different, so every model is also significantly different. Don't try to create a reputation scripting language. Certainly there are common abstractions, as represented in the graphical grammar, but those should not be confused with actual executing code. To get the desired customization, scale, and performance the reputation processes should be expressed directly in native code. The Yahoo! Reputation Platform expressed the reputation models directly in PHP. After the first few models were complete, common patterns were packaged and released as code libraries, which decreased the implementation time for each model.
  • Focus your building efforts on the core reputation framework itself…use existing toolkits for messaging, resource management, and databases. No need to reinvent the wheel.
  • Favor performance over slavishly copying the diagrams' implied modularity. For example, even the Simple Accumulator process is probably best implemented primarily in the database as a stored procedure. Many of the patterns work out to be read-modify-write, so the main alternatives are stored procedures or deferring the database modifications as long as possible, given your reliability requirements.
  • Creating a common platform is necessary, but not sufficient, to get applications to share data. In practice it turned out that the problem of reconciling the entity identifiers between sites was a time-intensive task that often was deprioritized. Often merging two previously existing entity databases was not 100% automatic and required manual supervision. Even when the data was merged, it typically required each sharing application to modify existing user-facing application code, another expense. This latter problem can be somewhat mitigated in the short term by writing backward-compatible interfaces for legacy application code.

Your Mileage May Vary

Given the number of variations on reputation framework requirements and your application's technical environment, the two examples just presented represent extremes that don't exactly apply to your situation. Our advice is to design in favor of adaptability…a constraint we intentionally left off the choice list.

It took three separate tries to implement the Yahoo! Reputation Platform.

Yahoo! first tried to do it on the cheap, with a database vendor creating a request-reply, all database-procedure-based implementation. That attempt surfaced an unacceptable performance/reliability trade-off and was abandoned.

The second attempt taught us about premature reputation model compilation and optimization, and that we could loosen the strongly-typed, compiled-language requirement in order to make reputation model implementation more flexible and accessible to more programmers.

The third platform finally made it to deployment, and the lessons are reflected in Appx_A-Yahoo_Lessons_Learned . It is worth noting that though the platform delivers on the original requirements, the sharing requirement…listed as a primary driver for the project…is not yet in extensive use. Despite repeated assertions by senior product management, application designers still require orientation in the benefits of sharing their data, as well as in leveraging the shared reputations of other applications. Presently, only Customer Care looks at cross-property karma scores, to help determine whether an account that might otherwise be automatically suspended should get additional, high-touch support instead.

Recommendations for all Reputation Frameworks

Reputation is a database. Reputation statements should be stored and indexed separately so that applications can continue to evolve new uses for the claims.

Though it is tempting to mix the reputation process code in with your application, don't do it! You will be changing the model over time to either fix bugs, achieve the results you were originally looking for, or to mitigate abuse…and this will be all but impossible unless reputation remains a distinct module.

Sources and Targets are foreign keys, and generally the reputation framework has little to no specific knowledge of the data objects indexed by those keys. Everything the reputation model needs to compute the claims should be passed in messages or remain directly accessible to each reputation process.

Discipline! The reputation framework manages nothing less than the code that sets the valuation of all user-generated and user-evaluated content in your application. As such, it deserves regular critical design and code reviews and full testing suites. Log and audit every interesting input, especially any claim overrides performed during operations. There have been many examples of employees manipulating reputation scores in return for status or favors.
