BWRS Peer Critique - Engage!
We've received very positive reviews from others as well, but Eric is the first of our peers to take up the challenge of wanting more from our work. First, a little about him from his bio:
Eric Goldman is an Associate Professor of Law at Santa Clara University School of Law. He also directs the school's High Tech Law Institute … [his] research focuses on Internet law, intellectual property, marketing, and the legal and social implications of new communication technologies … He also has some great presentations online, especially a recent one about Regulating Reputation Systems [video]. It's great work that has already influenced our thinking and even our recent public presentations. Too bad we didn't even know about Eric or his work before completing the book. That lack of a coherent way to find and discuss work on reputation systems was one of the reasons we started this effort. But we're getting ahead of ourselves…
Hopefully, if you're interested enough to read this far, you've also read Eric's review. Go ahead, we'll wait—this response assumes you have, so we don't have to quote a lot of context. Bryce and Randy each have thoughts to share about the issues raised, so we'll call out our responses by name below.
"…a debate worth having"
Randy: Thank you for taking the time to write such a detailed critique. Before discussing the critical points, I really appreciate the props you give us for being the first to put a book together in this area, and your words of support for our experience. Hopefully ours is the first of several books, contributed by many authors. As you said: "…the book provides a good repository of high-value experience-based perspectives that are not readily available elsewhere. Even if the book’s recommendations are debatable, it’s a debate worth having."
We asked for this debate, and you've engaged, so let's go!
Not Enough Citations
Bryce: Hi Eric—thank you for the insightful critique. It is exactly this level of dialog that we'd hoped the book would inspire, and many of your points are dead-on.
In particular, calling us out for a paucity of cited references stings a bit (tho' deservedly so!). Randy and I made the decision early on that we would consciously avoid writing a 'survey' book—one focused on cataloging the various market- and academe-based approaches to reputation.
And, you're right, there is a deep, rich vein of prior art to be explored there—digesting all of it, and putting it into a consumer-friendly format for product and industry folks would indeed be a fantastic resource. It's just not the book we chose to write. (And the Ariely references? Yeah, I kinda feel those stick out like a sore thumb, too—we'll definitely want to leaven the text with richer references if we get a crack at a second edition. Suggestions are welcome!)
Randy: It's tough to pick a target market for a book. Our primary experience was with product managers and web designers who were making very basic reputation system design errors. I've tried getting them to read whitepapers on ratings, reviews, and reputation—and honestly, it wasn't worth their time (though it should have been!). But the problem of what to cite is worse than that: the fact that we were writing the very first book on the subject is a testament to how difficult it is to actually find good source material.
Much of your critique (and our response) is about terminology and usage in this new domain—even just doing searches is non-trivial. Heck, one of the key repositories we cite in the appendix [web.si.umich.edu/reputations/] hadn't been updated in years and now appears to be gone (or moved somewhere I can't find at the moment).
For example, you say "[w]e implemented a very similar system embodying these two points back in 2000-01 at Epinions"—where is this documented? Links, please! (I didn't see them in your post.) If it exists, I either missed it because I didn't know the correct keywords, or it didn't get enough link-love to show up when I tried. It would have been fantastic to be able to point at proof before embarking on the uphill battle to convince Yahoo! product managers to even try allowing users to moderate the worst-of-the-worst content.
I so look forward to the day that the stuff in our book is common knowledge—but it isn't even close. This isn't the first new field of study I've been an early pioneer in: I'm the co-author, with Chip Morningstar, of The Lessons of Lucasfilm's Habitat—the first paper on creating and operating avatar virtual worlds, written in 1990 (it, too, was a practitioner's take on what was, up to then, a largely theory-dominated field). Lessons has been cited in over 100 books, and yet there are still people building systems with the errors that Chip and I clearly identified more than 20 years ago! It's a long road we're on together.
BTW, both Bryce and I would really like to own a copy of the book that Eric thinks our book could/should have been—does someone want to write it? Or is it really close to what we already have, and you—kind readers—just need to send us the links?
Object Reputation vs. Grading & Filtering
Bryce: I do take issue with one of your criticisms—your dismissal of content reputation as mere "grading and filtering" of content items, and your assertion that reputation for content items "does not work."
You're mistaking a useful application of reputation (the ability to sort and promote/demote, which we cover in Chapter 8) with an attribute of the object being sorted: quality, freshness, popularity, etc. These attributes are determined, of course, by community consensus, and—as it turns out—there's already a pretty good term for 'a general consensus about something, arrived at by a number of sources, some of them known to you and some of them not': it's reputation.
While it's true that certain types of content are fairly immutable, the contexts in which they're embedded are infinitely variable, and make reputation an invaluable way to think about, and tabulate, these attributes.
Let's take music as an example: an MP3 track is generally fixed and, you're correct, "does not change its character unless subsequently edited." So, perhaps (and I'm actually not willing to concede this point, but more on this later) "reputation" for one particular track may be of limited value.
But how about a song? How about a specific performance of a song? Try telling any of the contributors to The DeadLists Project that 'a song is a song is a song.' They've cataloged over 40 years of concert recordings from Grateful Dead shows, and can probably tell you exactly which performances of "Stella Blue" are the superior, must-listen experiences. Different context, different expectations for reputation.
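To make the 'different context, different expectations' idea concrete, here's a minimal sketch in Python (the class, scores, and context labels are all invented for illustration, not taken from the book): reputation is keyed by an (entity, context) pair, rather than by the entity alone.

```python
from collections import defaultdict


class ReputationStore:
    """Toy store of reputation claims keyed by (entity, context) pairs.

    The same entity can carry different reputations in different
    contexts -- e.g., a song as a studio track vs. a particular
    live performance of that song.
    """

    def __init__(self):
        self._scores = defaultdict(list)  # (entity, context) -> [ratings]

    def rate(self, entity, context, score):
        self._scores[(entity, context)].append(score)

    def reputation(self, entity, context):
        """Simple average; a real system would weight by source, recency, etc."""
        ratings = self._scores[(entity, context)]
        return sum(ratings) / len(ratings) if ratings else None


store = ReputationStore()
store.rate("Stella Blue", "studio track", 3)
store.rate("Stella Blue", "1973 live performance", 5)
store.rate("Stella Blue", "1973 live performance", 5)

print(store.reputation("Stella Blue", "studio track"))          # 3.0
print(store.reputation("Stella Blue", "1973 live performance"))  # 5.0
```

The design choice that matters here is the key: because context is part of it, the immutability of the underlying object is irrelevant to whether reputation "works."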
Further, how about a playlist—one in which songs appear and disappear over time, coming in and out of rotation? The tracks themselves don't change, but collections of content objects most certainly do. Tracking the reputation of a collection gives consumers valuable information to judge that asset: am I likely to like the types of songs featured here? (Google 'Billboard Payola Scandal' and then tell me that influencing content reputation hasn't historically been a very lucrative endeavor.)
Of course, content doesn't merely spring forth like Athena from the forehead of Zeus—no, people create content. So, many times, content reputation is useful as a kind of "proxy reputation" for a person (its creator). What's the best way to know an artist's reputation? Why, look at how their works are received: how many downloads, how many sales, remixes, adds to playlists. These things are generally a much better indicator of an artist's impact than who they're dating or what hotel room they've trashed lately.
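A rough sketch of that roll-up (the signal names and weights below are arbitrary illustrations, not recommendations from the book): an artist's proxy reputation is just an aggregate over how each of their works is received.

```python
# Toy roll-up of content-reception signals into a creator's "proxy reputation".
# The weights are invented for illustration; picking real ones is the hard part.
SIGNAL_WEIGHTS = {"downloads": 1.0, "sales": 5.0, "remixes": 3.0, "playlist_adds": 2.0}


def track_score(signals):
    """Weighted sum of reception signals for a single work."""
    return sum(SIGNAL_WEIGHTS[name] * count for name, count in signals.items())


def artist_karma(tracks):
    """A creator's proxy reputation: the aggregate reception of their works."""
    return sum(track_score(signals) for signals in tracks)


tracks = [
    {"downloads": 1000, "sales": 50, "playlist_adds": 120},
    {"downloads": 200, "remixes": 4, "playlist_adds": 10},
]
print(artist_karma(tracks))  # 1722.0
```

Note that the person-reputation here is computed entirely from content-reputation inputs, which is the intertwining the next paragraph argues for.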
It's our contention that people and content reputation are inextricably intertwined: to even attempt to assess one in the absence of the other would be—and for many failed startups, has been—an exercise in futility.
And, as promised, a return to your initial point: that content doesn't change over time. This is a question that goes back at least as far as Socrates and the Sophists: are the qualities of a thing intrinsic to the thing itself, or imparted instead by the context that we situate it within? I (and, generally, subsequent history, Aristotle notwithstanding) would argue the latter.
So, Mark Twain's Huck Finn, barring some minor edits and censored bits over the years, is indeed the same text that it's always been. But I don't think anyone would seriously argue that its reputation (our shared perception of its value, its place in our cultural fabric) hasn't changed drastically over the years.
The exact same thing takes place, on smaller scales and with less evident effects, every time someone favorites a video on YouTube, or 'Bans' an artist from their Last.fm personal channel.
Randy: Interesting that you call out Karma as a confusing term for person-reputation; I see it in a lot of white papers these days. :-) Nonetheless, all terminology should be up for debate at this point. Sorting out entity-reputation from person-reputation is important; the naming of names is negotiable. Any counter-suggestions?
Randy: Again, we're so grateful to Eric for kickstarting the debate on these important issues. As I've said to more than one dejected-looking peer: "Don't be sad that I'm critical of your ideas—that means they are interesting enough to criticize! If I didn't like them, I'd just go do something else and ignore them." I'm now accepting that advice myself. Here's hoping that the issues around reputation systems remain interesting enough to continue criticism, discussion, and refinement.
Bryce & Randy
Please, peers, leave comments here - if there's enough interest we're happy to move the debate to the wiki…