chapter_4 [2009/12/01 14:17] (current)
randy: submitted for publisher review
===== Common Reputation Models =====
Now we're going to start putting our simple reputation building blocks from <html><a href="/doku.php?id=Chapter_3">Chapter_3</a>&nbsp;</html>to work. Let's look at some actual reputation models to understand how the claims, inputs, and processes described in the last chapter can be combined to model a target entity's reputation.
In this chapter, we'll name and describe a number of simple and broadly-deployed reputation models, such as vote-to-promote, simple ratings, and points. You probably have some degree of familiarity with these patterns by simple virtue of being an active participant online. You see them all over the place-they're the bread and butter of today's social Web. Later in the chapter, we'll show you how to combine these simple models and expand upon them to make real-world models.
Understanding how these simple models combine to form more complete ones will help you identify them when you see them in the wild. All of this will become important later in the book, as we start to design and architect your own tailored reputation models.
These controls may take the form of explicit votes for a reputable entity, or they may be more subtle implicit indicators of quality (such as the ability to bookmark content or send a link to it to a friend). A count of the number of times these controls are accessed forms the initial input into the system; the model uses that count to tabulate the entities' reputations.
In its simplest form, a favorites-and-flags model can be implemented as a simple counter. (<html><a href="#Figure_4-1">Figure_4-1</a>&nbsp;</html>) When you start to combine them into more complex models, you'll probably need the additional flexibility of a reversible counter.
<html><a name="Figure_4-1"><center></html>// Figure_4-1: Favorites, flags, or send-to-a-friend models can be built with a Simple Counter process-count 'em up and keep score. //<html></center></a></html>
<html><center><img width="60%" src="http://buildingreputation.com/lib/exe/fetch.php?media=Figure_4-1.png"/></center></html>
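A reversible counter like the one described above might be sketched as follows. This is an illustrative Python sketch, not code from the book; the class and method names are invented.

```python
class ReversibleCounter:
    """Counts favorite/flag events and supports reversal, e.g. when a
    user removes a favorite or a vote is later judged abusive."""

    def __init__(self):
        self.count = 0

    def increment(self):
        self.count += 1

    def reverse(self):
        # Undo one previously counted event; never drop below zero.
        self.count = max(0, self.count - 1)


favorites = ReversibleCounter()
favorites.increment()
favorites.increment()
favorites.reverse()     # a user un-favorited the entity
print(favorites.count)  # 1
```

A plain simple counter would omit `reverse()`; the reversible form matters once counts feed other processes that must be recomputable.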
The favorites-and-flags model has three variants.
If you give your users options for expressing their opinion about something, you are giving them a vote. A very common use of the voting model (which we've referenced throughout this book) is to allow community members to vote on the usefulness, accuracy, or appeal of something.
To differentiate from more open-ended voting schemes like vote-to-promote, it may help to think of these types of actions as “this-or-that” voting: choosing the most attractive option from within a bounded set of possibilities. (See <html><a href="#Figure_4-2">Figure_4-2</a>&nbsp;</html>.)
It's often more convenient to store that reputation statement back as a part of the reputable entity that it applies to, making it easier, for example, to fetch and display a “Was this review helpful?” score. (See <html><a href="#Figure_2-7">Figure_2-7</a>&nbsp;</html>.)
<html><a name="Figure_4-2"><center></html>// Figure_4-2: Those “Helpful Review” scores that you see are often nothing more than a Simple Ratio. //<html></center></a></html>
<html><center><img width="60%" src="http://buildingreputation.com/lib/exe/fetch.php?media=Figure_4-2.png"/></center></html>
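The Simple Ratio behind a “Was this review helpful?” score is nothing more than helpful votes over total votes. A minimal sketch (illustrative Python; the function name is invented):

```python
def helpful_ratio(helpful_votes, total_votes):
    """'Was this review helpful?' expressed as a Simple Ratio.

    Returns None when there are no votes yet, so the application can
    display nothing rather than a misleading 0%.
    """
    if total_votes == 0:
        return None
    return helpful_votes / total_votes


# "3 of 4 people found this review helpful"
print(helpful_ratio(3, 4))  # 0.75
```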
=== Ratings ===
When an application offers users the ability to express an explicit opinion about the quality of something, it typically employs a ratings model. (<html><a href="#Figure_4-3">Figure_4-3</a>&nbsp;</html>) There are a number of different scalar-value ratings: stars, bars, “HotOrNot,” or a 10-point scale. (We'll discuss how to choose from amongst the various types of ratings inputs in <html><a href="/doku.php?id=Chapter_6#Chap_6-Choosing_Your_Inputs">Chap_6-Choosing_Your_Inputs</a>&nbsp;</html>.) In the ratings model, ratings are gathered from multiple individual users and rolled up as a community average score for that target.
<html><a name="Figure_4-3"><center></html>// Figure_4-3: Individual ratings contribute to a community average. //<html></center></a></html>
<html><center><img width="80%" src="http://buildingreputation.com/lib/exe/fetch.php?media=Figure_4-3.png"/></center></html>
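The community-average roll-up can be sketched as a running sum and count, which (like the counter earlier) is kept reversible so an individual rating can later be withdrawn. Illustrative Python; names are invented, not from the book.

```python
class CommunityAverage:
    """Rolls individual scalar ratings up into a community average.

    Storing the running total and count, rather than just the average,
    keeps the process reversible when a rating is revoked.
    """

    def __init__(self):
        self.total = 0.0
        self.count = 0

    def add(self, rating):
        self.total += rating
        self.count += 1

    def remove(self, rating):
        self.total -= rating
        self.count -= 1

    @property
    def average(self):
        return self.total / self.count if self.count else None


stars = CommunityAverage()
for rating in (5, 4, 3):
    stars.add(rating)
print(stars.average)  # 4.0
stars.remove(3)       # one rating is withdrawn
print(stars.average)  # 4.5
```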
=== Reviews ===
Some ratings are most effective when they travel together. More complex reputable entities frequently require more nuanced reputation models, and the ratings-and-review model, <html><a href="#Figure_4-4">Figure_4-4</a>&nbsp;</html>, allows users to express a variety of reactions to a target. While each rated facet could be stored and evaluated as its own specific reputation, semantically that wouldn't make much sense-it's the review in its entirety that is the primary unit of interest.
In the reviews model, a user gives a target a series of ratings and provides one or more freeform text opinions. Each individual facet of a review feeds into a community average.
<html><a name="Figure_4-4"><center></html>// Figure_4-4: A full user review typically is made up of a number of ratings and some free-form text comments. Those ratings with a numerical value can, of course, contribute to aggregate community averages as well. //<html></center></a></html>
<html><center><img width="80%" src="http://buildingreputation.com/lib/exe/fetch.php?media=Figure_4-4.png"/></center></html>
<html><a name='Chap_4-Points'></a></html>
=== Points ===
For some applications, you may want a very specific and granular accounting of user activity on your site. The points model, <html><a href="#Figure_4-5">Figure_4-5</a>&nbsp;</html>, provides just such a capability. With points, your system counts up the hits, actions, and other activities that your users engage in and keeps a running sum of the awards.
<html><a name="Figure_4-5"><center></html>// Figure_4-5: As a user engages in various activities, they are recorded, weighted, and tallied. //<html></center></a></html>
<html><center><img width="60%" src="http://buildingreputation.com/lib/exe/fetch.php?media=Figure_4-5.png"/></center></html>
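The record-weight-tally pipeline shown in the figure amounts to a weighted sum over an activity log. A sketch in Python; the activity names and point values here are invented for illustration, not recommendations.

```python
# Hypothetical point values; in a real system these weights are a
# sensitive design decision (see the two dangers discussed below).
POINT_VALUES = {
    "post_comment": 1,
    "upload_photo": 5,
    "write_review": 10,
}


def tally_points(activity_log):
    """Weight each recorded activity and keep a running sum of awards.
    Unrecognized activities earn nothing."""
    return sum(POINT_VALUES.get(activity, 0) for activity in activity_log)


log = ["post_comment", "write_review", "upload_photo", "post_comment"]
print(tally_points(log))  # 17
```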
This is a tricky model to get right. In particular, you face two dangers:
  - Tying inputs to point values almost forces a certain amount of transparency onto your system. It is hard to reward activities with points without also communicating to your users what those relative point values are. (See <html><a href="/doku.php?id=Chapter_4#Chap_4-Keep_Your_Barn_Door_Closed">Chap_4-Keep_Your_Barn_Door_Closed</a>&nbsp;</html>, below.)
  - You risk unduly influencing certain behaviors over others: it's almost certain that some minority of your users (or-in a success-disaster scenario-the //majority// of your users) will make points-based decisions about which actions they'll take.
<note caution>There are significant differences between points awarded for reputation purposes and monetary points that you may dole out to users as currency. The two are frequently confounded, but reputation points should not be spendable.
<html><a name='Chap_4-Robust_Karma'></a></html>
== Robust Karma ==
By itself, a participation-based karma score is inadequate to describe the value of a user's contributions to the community: we will caution time and again throughout the book that rewarding simple activity is an impoverished way to think about user karma. However, you probably don't want a karma score based solely on quality of contributions either. Under this circumstance, you may find your system rewarding //cautious// contributors-ones who, out of a desire to keep their quality-ratings high-only contribute to “safe” topics, or-once having attained a certain quality ranking-decide to stop contributing to protect that ranking.
What you really want to do is to combine quality-karma and participation-karma scores into one score-call it robust karma. The robust-karma score represents the //overall// value of a user's contributions: the quality component ensures some thought and care in the preparation of contributions, and the participation side ensures that the contributor is very active, that she's contributed recently, and (probably) that she's surpassed some minimal thresholds for user participation-enough that you can reasonably separate the passionate, dedicated contributors from the fly-by post-then-flee crowd.
The weight you'll give to each component depends on the application. Robust-karma scores often are not displayed to users, but may be used instead for internal ranking or flagging, or as factors influencing search ranking; see <html><a href="/doku.php?id=Chapter_4#Chap_4-Keep_Your_Barn_Door_Closed">Chap_4-Keep_Your_Barn_Door_Closed</a>&nbsp;</html>, below, for common reasons for this secrecy. But even when karma scores are displayed, a robust-karma model has the advantage of encouraging users both to contribute the best stuff (as evaluated by their peers) and to do it often.
When negative factors are included in calculating robust-karma scores, the result is particularly useful for customer care staff-both to highlight users who have become abusive or users whose contributions decrease the overall value of content on the site, and potentially to provide an increased level of service to proven-excellent users who become involved in a customer service procedure. A robust-karma model helps find the best of the best and the worst of the worst.
<html><a name="Figure_4-6"><center></html>// Figure_4-6: A robust-karma model might combine multiple other karma scores-measuring, perhaps, not just a user's output (Participation) but their effectiveness (or Quality) as well. //<html></center></a></html>
<html><center><img width="60%" src="http://buildingreputation.com/lib/exe/fetch.php?media=Figure_4-6.png"/></center></html>
<html><a name='Chap_4-Combining_the_Simple_Models'></a></html>
Eventually, a site based on a simple reputation model, such as the ratings-and-reviews model, is bound to become more complex. Probably the most common reason for increasing complexity is this progression: as an application becomes more successful, it becomes clear that some of the site's users produce higher-quality reviews. These quality contributions begin to significantly increase the value of the site to end users and to the site operator's bottom line. As a result, the site operator looks for ways to recognize these contributors, increase the search ranking value of their reviews, and generally provide incentives for this value-generating behavior. Adding a karma reputation model to the system is a common approach to reaching those goals.
The simplest way to introduce a quality-karma score to a simple ratings-and-reviews reputation system is to add a “Was this helpful?” feedback mechanism that visiting readers may use to evaluate each review.
<html><center><img width="80%" src="http://buildingreputation.com/lib/exe/fetch.php?media=InformalFigure_4-6.5.png"/></center></html>
The example in <html><a href="#Figure_4-7">Figure_4-7</a>&nbsp;</html>is a hypothetical product reputation model, and the reviews focus on 5-star ratings in the categories “overall,” “service,” and “price.” These specifics are for illustration only and are not critical to the design. This model could just as well be used with thumb ratings and any arbitrary categories like “sound quality” or “texture.”
<html><a name="Figure_4-7"><center></html>// Figure_4-7: In this two-tiered system, users write reviews and other users review those reviews. The outcome is a lot of useful reputation information about the entity in question (here, Dessert Hut) and all the people who review it. //<html></center></a></html>
<html><center><img width="100%" src="http://buildingreputation.com/lib/exe/fetch.php?media=Figure_4-7.png"/></center></html>
The combined ratings-and-reviews with karma model has one compound input: the review and the was-this-helpful vote. From these inputs, the community rating averages, the ''WasThisHelpful'' ratio, and the reviewer quality-karma rating are generated on the fly. Pay careful attention to the sources and targets of the inputs of this model-they are not the same users, nor are their ratings targeted at the same entities.
  - After a simple point accumulation model, our reviewer quality karma model is probably the simplest karma model possible: track the ratio of total was-this-helpful votes for all the reviews that a user has written to the total number of votes received. We've labeled this a custom ratio because we assume that the application will be programmed to include certain features in the calculation such as requiring a minimum number of votes before considering any display of karma to a user. Likewise, it is typical to create a nonlinear scale when grouping users into karma display formats, such as badges like “top 100 reviewer.” See the <html><a href="/doku.php?id=Chapter_4#Chap_4-eBay_Merchant_Feedback_Karma">Chap_4-eBay_Merchant_Feedback_Karma</a>&nbsp;</html>model and <html><a href="/doku.php?id=Chapter_7">Chapter_7</a>&nbsp;</html>for more on display patterns for karma.
Karma models, especially public karma models, are subject to massive abuse by users interested in personal status or commercial gain. For that reason, this process must be reversible.
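The custom ratio described above, with its minimum-vote requirement and a nonlinear display scale, might be sketched like this. The threshold, tier names, and cutoffs here are invented for illustration; they are application design decisions, not values from the book.

```python
def reviewer_karma(helpful_votes, total_votes, min_votes=10):
    """Custom ratio: helpful votes across all of a user's reviews over
    total votes received. Returns None until the user has accumulated
    enough votes for the score to be worth displaying at all."""
    if total_votes < min_votes:
        return None
    return helpful_votes / total_votes


def karma_badge(ratio):
    """Map the ratio onto a nonlinear display scale: most users get no
    badge, and only the very best earn the top tier."""
    if ratio is None:
        return None
    if ratio >= 0.95:
        return "Top Reviewer"
    if ratio >= 0.75:
        return "Trusted Reviewer"
    return None


print(karma_badge(reviewer_karma(9, 9)))     # None - too few votes yet
print(karma_badge(reviewer_karma(96, 100)))  # Top Reviewer
```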
Now that we have a community-generated quality karma claim for each user (at least those who have written a review noteworthy enough to invite helpful votes), you may notice that this model doesn't use that score as an input or weight in calculating other scores. This configuration is a reminder that reputation models all exist within an application context-therefore the most appropriate use for this score will be determined by your application's needs.
Perhaps you will keep the quality-karma score as a corporate reputation, helping to determine which users should get escalating customer support. Perhaps the score will be public, displayed next to every one of a user's reviews as a status symbol for all to see. It might even be personal, shared only with each reviewer, so that reviewers can see what the overall community thinks of their contributions. Each of these choices has different ramifications, which we discuss in <html><a href="/doku.php?id=Chapter_6">Chapter_6</a>&nbsp;</html>in detail.
</note>
We have simplified the model for illustration, specifically by omitting the processing for the requirement that only buyer feedback and Detailed Seller Ratings (DSR) //provided over the previous 12 months// are considered when calculating the positive feedback ratio, DSR community averages, and-by extension-power seller status. Also, eBay reports user feedback counters for the last month and quarter, which we are omitting here for the sake of clarity. Abuse mitigation features, which are not publicly available, are also excluded.
<html><a name="Figure_4-8"><center></html>// Figure_4-8: This simplified diagram shows how buyers influence a seller's karma scores on eBay. Though the specifics are unique to eBay, the pattern is common to many karma systems. //<html></center></a></html>
<html><center><img width="100%" src="http://buildingreputation.com/lib/exe/fetch.php?media=Figure_4-8.png"/></center></html>
<html><a href="#Figure_4-8">Figure_4-8</a>&nbsp;</html>illustrates the seller feedback karma reputation model, which is made out of typical model components: two compound buyer input claims-seller feedback and detailed seller ratings-and several roll-ups of the seller's karma: community feedback ratings (a counter), feedback level (a named level), positive feedback percentage (a ratio), and the power seller rating (a label).
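The 12-month consideration window mentioned earlier (and omitted from the diagram) amounts to filtering feedback events by age before any roll-up runs. A sketch of that step in Python; the record layout is an assumption for illustration.

```python
from datetime import date, timedelta


def recent_feedback(feedback, today, window_days=365):
    """Keep only feedback events from the trailing 12 months; older
    events no longer count toward the positive feedback ratio or the
    DSR community averages."""
    cutoff = today - timedelta(days=window_days)
    return [event for event in feedback if event["date"] >= cutoff]


feedback = [
    {"date": date(2009, 11, 1), "positive": True},
    {"date": date(2008, 6, 1), "positive": False},  # too old to count
]
recent = recent_feedback(feedback, today=date(2009, 12, 1))
print(len(recent))  # 1
```

In practice this also means a seller's scores drift as old events age out of the window, so the roll-ups must be recomputed (or kept reversible) over time.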
</note>
This score is an input into the power seller rating, which is a highly coveted rating. This means that each and every individual positive and negative rating given on eBay is a critical one-it can mean the difference for a seller between acquiring the coveted power seller status, or //not//.
  - The Detailed Seller Ratings community averages are simple reversible averages for each of the four ratings categories: “item as described,” “communications,” “shipping time,” and “shipping and handling charges.” There is a limit on how often a buyer may contribute DSRs.
eBay only recently added these categories as a new reputation model because including them as factors in the overall seller feedback ratings diluted the overall quality of seller and buyer feedback. Sellers could end up in disproportionate trouble just because of a bad shipping company or a delivery that took a long time to reach a remote location. Likewise, buyers were bidding low prices only to end up feeling gouged by shipping and handling charges.
Fine-grained feedback allows one-off small problems to be averaged out across the DSR community averages instead of being translated into red-star negative scores that poison trust overall. Fine-grained feedback for sellers is also actionable by them and motivates them to improve, since these DSR scores make up half of the power seller rating.
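The “simple reversible average” mechanic behind the DSR community averages is worth making concrete. Here's a minimal sketch (our own illustration, not eBay's implementation); reversibility simply means that a previously counted rating can be backed out, say when feedback is withdrawn or revised:

```python
class ReversibleAverage:
    """Roll-up that keeps a running average of ratings and supports
    reversing (removing) a rating that was previously added."""

    def __init__(self):
        self.total = 0.0
        self.count = 0

    def add(self, rating):
        """Count a new rating (e.g., 1-5 stars) into the average."""
        self.total += rating
        self.count += 1

    def reverse(self, rating):
        """Back out a rating that was previously added."""
        self.total -= rating
        self.count -= 1

    @property
    def average(self):
        """Current community average; 0.0 when no ratings remain."""
        return self.total / self.count if self.count else 0.0
```

Each of the four DSR categories would keep its own independent average of this kind.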
  - The power seller rating, appearing next to the seller's ID, is a prestigious label that signals the highest level of trust. It includes several factors external to this model, but two critical components are the positive feedback percentage, which must be at least 98%, and the DSR community averages, which each must be at least 4.5 stars (around 90% positive). Interestingly, the DSR scores are more flexible than the feedback average, which tilts the rating toward overall evaluation of the transaction rather than the related details.
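The two reputation thresholds just described reduce to a simple eligibility check. This sketch is hypothetical--the function name and inputs are ours, and the real rating also weighs factors external to this model:

```python
def power_seller_eligible(positive_feedback_pct, dsr_averages):
    """Check the two reputation components of the power seller rating:
    positive feedback percentage must be at least 98%, and every DSR
    category's community average must be at least 4.5 stars."""
    return (positive_feedback_pct >= 98.0 and
            all(average >= 4.5 for average in dsr_averages.values()))
```

For example, a seller at 98.2% positive with DSR averages of 4.8, 4.6, 4.5, and 4.9 would pass; dropping any single DSR average to 4.4 would not.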
Though the context for the buyer's claims is a single transaction or history of transactions, the context for the aggregate reputations that are generated is //trust in the eBay marketplace itself//. If the buyers can't trust the sellers to deliver against their promises, eBay cannot do business. When considering the roll-ups, we transform the single-transaction claims into trust in the seller, and--by extension--that same trust rolls up into eBay. This chain of trust is so integral and critical to eBay's continued success that they must continuously update the marketplace's interface and reputation systems.
=== Flickr Interestingness Scores for Content Quality ===
The popular online photo service Flickr uses reputation to qualify new user submissions and track user behavior that violates Flickr's terms of service. Most notably, Flickr uses a completely custom reputation model called “interestingness” for identifying the highest-quality photographs submitted from the millions uploaded every week. Flickr uses that reputation score to rank photos by user and, in searches, by tag.
Interestingness is also the key to Flickr's [[http://flickr.com/explore|“Explore” page]], which displays a daily calendar of the photos with the highest interestingness ratings, and users may use a graphical calendar to look back at the worthy photographs from any previous day. It's like a daily leaderboard for newly-uploaded content.
<note tip>The version of Flickr interestingness that we are presenting here is an abstraction based on several different pieces of evidence: the U.S. patent application (Number 2006/0242139 A1) filed by Flickr; comments that Flickr staff has given on their own message boards; observations by power users in the community; and our own experience in building such reputation systems.
As with all the models we describe in this book, we've taken some liberties to simplify the model for presentation--specifically, the patent mentions various weights and ceilings for the calculations without actually prescribing any particular values for them. We make no attempt to guess at what these values might be. Likewise, we have left out the specific calculations.
We do, however, offer two pieces of advice for anyone building similar systems: there is no substitute for gathering historical data when you are deciding how to clip and weight your calculations, and--even if you get your initial settings correct--you will need to adjust them over time to adapt to the use patterns that will emerge as the direct result of implementing reputation. (See <html><a href="/doku.php?id=Chapter_9#Chap_9-Emergent_Effects_and_Defects">Chap_9-Emergent_Effects_and_Defects</a>&nbsp;</html>)
</note>
<html><a name="Figure_4-9"><center></html>// Figure_4-9: Interestingness ratings are used in several places on the Flickr site, but most noticeably on the “Explore” page, a daily calendar of photos selected using this content reputation model. //<html></center></a></html>
<html><center><img width="100%" src="http://buildingreputation.com/lib/exe/fetch.php?media=Figure_4-9.png"/></center></html>
<html><a href="#Figure_4-9">Figure_4-9</a>&nbsp;</html>has two primary outputs: ''photo interestingness'' and ''interesting photographer karma'', and everything else feeds into those two key claims.
    * A viewer can attach a note to the photo by adding a rectangle over a region of the photo and typing a short note.
    * When a viewer comments on a photo, that comment is displayed for all other viewers to see. The first comment is usually the most important, because it encourages other viewers to join the conversation. We don't know whether Flickr weighs the first comment more heavily than subsequent ones. (Though that is certainly common practice in some reputation models.)
    * By clicking the “Add to Favorites” icon, a viewer not only endorses a photo but shares that endorsement--the photo now appears in the viewer's profile, on his or her “My Favorites” page.
    * If a viewer downloads the photo (depending on a photo's privacy settings, image downloads are available in various sizes), that is also counted as a viewer activity. (Again, we don't know for sure, but it would be smart on Flickr's part to count multiple repeat downloads as //only one action//, lest they risk creating a back door to attention-gaming shenanigans.)
    * Lastly, the viewer can click “Send to Friend,” creating an email with a link to the photo. If the viewer addresses the message to multiple users or even a list, this action could be considered republishing. However, applications generally can't distinguish a list address from an individual person's address, so for reputation purposes we assume that the addressee is always an individual.
  - //Tagging// is the action of adding short text strings describing the photo for categorization. Flickr tags are similar to pre-generated categories, but they exist in a folksonomy: whatever tags users apply to a photo, that's what the photo is about. Common tags include ''2009'', ''me'', ''Randy'', ''Bryce'', ''Fluffy'', and ''cameraphone'', along with the expected descriptive categories of ''wedding'', ''dog'', ''tree'', ''landscape'', ''purple'', ''tall'', and ''irony''--which sometimes means “made of iron!”
Tagging gets special treatment in a reputation model because users must apply extra effort to tag an object, and determining whether one tag is more likely to be accurate than another requires complicated computation. Likewise, certain tags, though popular, should not be considered for reputation purposes at all. Tags have their own quantitative contribution to interestingness, but they also are considered viewer activities, so the input is split into both paths.
  - Sadly, many popular photographs turn out to be pornographic or in violation of Flickr's terms of service.
<note tip>On many sites--if left untended--porn tends to quickly generate a high quality reputation score. Remember, “quality” as we're discussing it is, to some degree, a measure of attention. Nothing garners attention like appealing to prurient interests.
The smart reputation designer can, in fact, leverage this unfortunate truth. Build a corporate-user “porn probability” reputation into your system--one that identifies content with a high (or //too//-high) velocity of attention and puts it in a prioritized queue for human agents to review.
</note>
//Flagging// is the process by which users mark content as inappropriate for the service. This is a negative reputation vote: by tagging a photo as abusive, the user is saying “this doesn't belong here.” This strong action should decrease the interestingness score //fast//--faster, in fact, than the other inputs can raise it.
  - //Republishing// actions represent a user's decision to increase the audience for a photo by either adding it to a Flickr group or embedding it in a web page. Users can accomplish either by using the blog publishing tools in Flickr's interface or by copying and pasting an HTML snippet that the application provides. Flickr's patent doesn't specifically say that these two actions are treated similarly, but it seems reasonable to do so.
Generally, four things determine a Flickr photo's interestingness (represented by the four parallel paths in <html><a href="#Figure_4-9">Figure_4-9</a>&nbsp;</html>): the viewer activity score, which represents the effect of viewers taking a specific action on a photo; tag relatedness, which represents a tag's similarity to others associated with other tagged photos; the negative feedback adjustment, which reflects reasons to downgrade or disqualify the tag; and group weighting, which has an early positive effect on reputation with the first few events.
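To make the four parallel paths concrete, here is one hypothetical way the scores might be combined into a photo interestingness claim. Flickr's patent names the inputs but prescribes neither weights nor a formula, so everything numeric below is an illustrative placeholder:

```python
# Illustrative weights only -- the patent does not disclose real values.
W_ACTIVITY, W_TAGS, W_GROUP = 0.6, 0.2, 0.2

def photo_interestingness(viewer_activity, tag_relatedness,
                          negative_feedback, group_weight):
    """Combine the four paths into a normalized 0.0-1.0 claim.

    All inputs are assumed normalized to 0.0-1.0. Negative feedback is
    applied as a multiplier at the end so that it nullifies accumulated
    positive reputation rather than merely offsetting it.
    """
    positive = (W_ACTIVITY * viewer_activity +
                W_TAGS * tag_relatedness +
                W_GROUP * group_weight)
    return max(0.0, min(1.0, positive * (1.0 - negative_feedback)))
```

Note the design choice the multiplier encodes: a fully flagged photo scores zero no matter how much positive attention it has received, which matches the requirement that flagging can pull a score down faster than the other inputs can raise it.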
<blockquote Flickr, U.S. Patent Application No. 2006/0242139 A1>
[0032] As part of the relatedness computation, the statistics engine may employ a statistical clustering analysis known in the art to determine the statistical proximity between metadata (e.g., tags), and to group the metadata and associated media objects according to corresponding cluster. For example, out of 10,000 images tagged with the word “Vancouver,” one statistical cluster within a threshold proximity level may include images also tagged with “Canada” and “British Columbia.” Another statistical cluster within the threshold proximity may instead be tagged with “Washington” and “space needle” along with “Vancouver.” Clustering analysis allows the statistics engine to associate “Vancouver” with both the “Vancouver-Canada” cluster and the “Vancouver-Washington” cluster. The media server may provide for display to the user the two sets of related tags to indicate they belong to different clusters corresponding to different subject matter areas, for example.
</blockquote>
This is a good example of a black-box process that may be calculated outside of the formal reputation system. Such processes are often housed on optimized machines or are run continuously on data samples in order to give best-effort results in real time.
For our model, we assume that the output will be a normalized score from ''0.0'' (no confidence) to ''1.0'' (high confidence) representing how likely it is that the tag is related to the content. The simple average of all the scores for the tags on this photo is stored in the reputation system so that it can be used to recalculate photo interestingness as needed.
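That averaging step is trivial but worth writing down, if only to show how an external black-box score re-enters the reputation system as an ordinary input (a sketch under our assumptions, not Flickr's code):

```python
def tag_relatedness(tag_scores):
    """Average the normalized per-tag confidence scores (0.0 = no
    confidence, 1.0 = high confidence) produced by the external
    clustering process. An untagged photo scores 0.0."""
    if not tag_scores:
        return 0.0
    return sum(tag_scores) / len(tag_scores)
```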
  - The negative feedback path determines the effects of flagging a photo as abusive content. Flickr documentation is nearly nonexistent on this topic (for good reason--see <html><a href="/doku.php?id=Chapter_4#Chap_4-Keep_Your_Barn_Door_Closed">Chap_4-Keep_Your_Barn_Door_Closed</a>&nbsp;</html>), but it seems reasonable to assume that even a small number of negative feedback events should be enough to nullify most, if not all, of a photo's interestingness score.
For illustration, let's say that it would only take five abuse reports to do the most damage possible to a photo's reputation. Using this math, each abuse report event would be worth ''0.2''. Negative feedback can be thought of as a reversible accumulator with a maximum value of ''1.0''.
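That reversible accumulator might look like the following sketch; the five-report ceiling and ''0.2'' weight are our illustrative numbers from above, not Flickr's:

```python
class NegativeFeedback:
    """Reversible accumulator for abuse reports, capped at 1.0 so that
    five reports (at 0.2 each) do the maximum possible damage."""

    REPORT_WEIGHT = 0.2

    def __init__(self):
        self.reports = 0

    def report(self):
        """Record one abuse report against the photo."""
        self.reports += 1

    def retract(self):
        """Reverse one report (e.g., if an appeal succeeds)."""
        self.reports = max(0, self.reports - 1)

    @property
    def value(self):
        """Current negative feedback score, clamped to at most 1.0."""
        return min(1.0, self.reports * self.REPORT_WEIGHT)
```

Because the claim is clamped, a pile-on of dozens of reports does no more damage than five, and retracting one of many excess reports leaves the score at its ceiling.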
<note tip>Note that this model doesn't account for abuse by users ganging up on a photo and flagging it as abusive when it is not. (See <html><a href="/doku.php?id=Chapter_8#Chap_8-Abuse_Reporter_Karma">Chap_8-Abuse_Reporter_Karma</a>&nbsp;</html>) That is a different reputation model, which we illustrate in detail in <html><a href="/doku.php?id=Chapter_10">Chapter_10</a>&nbsp;</html>: Yahoo! Answers Community Abuse Reporting.
For a business owner on today's Web, probably the greatest thing about social media is that the users themselves create the media from which you, the site operator, capture value. This means, however, that the quality of your site is directly related to the quality of the content created by your users.
This can present problems. Sure, the content is cheap--but you usually get what you pay for, and you will probably need to pay more to improve the quality. Additionally, some users have a different set of motivations than you might prefer.
We offer design advice to mitigate potential problems with social collaboration, and suggestions for specific nontechnical solutions.
In effect, the seller //can't make the order right// with the customer without refunding the purchase price in a timely manner. This puts them out of pocket for the price of the goods along with the hassle of trying to recover the money from the drop-shipper.
But a simple refund alone sometimes isn't enough for the buyer! No, depending on the amount of perceived hassle and effort this transaction has cost them, they are still likely to rate the transaction negatively overall. (And rightfully so--once it's become evident that a seller is working through a drop-shipper, many of their excuses and delays start to ring very hollow.) So a seller may have, at this point, laid out a lot of their own time and money to rectify a bad transaction only to //still// suffer the penalties of a red star.
What option does the seller have left to maintain their positive reputation? You guessed it--a payoff. Not only will a concerned seller eat the price of the goods--and any shipping involved--but they will also pay an additional //cash bounty// (typically up to $20.00) to get buyers to flip a red star to green.
What is the cost of clearing negative feedback on drop-shipped goods? The cost of the item + $20.00 + lost time in negotiating with the buyer. That's the cost that reputation imposes on drop-shipping on eBay.
Equally bad, however, is divulging //too much// detail about your reputation system to the community. And more site designers probably make this mistake, especially in the early stages of deploying the system and growing the community. As an example, consider the highly specific breakdown of actions on the Yahoo! Answers site, and the points awarded for each (see <html><a href="#Figure_4-10">Figure_4-10</a>&nbsp;</html>).
<html><a name="Figure_4-10"><center></html>// Figure_4-10: How to succeed at Yahoo! Answers? The site courteously provides you with a scorecard. //<html></center></a></html>
<html><center><img width="80%" src="http://buildingreputation.com/lib/exe/fetch.php?media=Figure_4-10.png"/></center></html>
Why might this breakdown be a mistake? For a number of reasons. Assigning overt point values to specific actions goes beyond //enhancing// the user experience and starts to directly influence it. Arguably, it may tip right over into the realm of //dictating// user behavior, which generally is frowned upon.
==== Reputation from Theory to Practice ====
The first section of this book (Chapters 1-4) was focused on reputation //theory//:

  * Understanding reputation systems through defining the key concepts
  * Defining a visual grammar for reputation systems
  * Creating a set of key building blocks and using them to describe simple reputation models
  * Using it all to illuminate popular complex reputation systems found in the wild
Along the way, we sprinkled in practitioner's tips to share what we've learned from existing reputation systems to help you understand what could, and already has, gone wrong.
Now you're prepared for the second section of the book: applying this theory to a specific application--yours. <html><a href="/doku.php?id=Chapter_5">Chapter_5</a>&nbsp;</html>starts the project off with three basic questions about your application design. In haste, many projects skip over one or more of these critical considerations--the results are often very costly.
chapter_4.txt · Last modified: 2009/12/01 14:17 by randy