Reviewers have very strong positive or negative views and appear to be unduly influenced by brand name, price, or personal connection.
In other words, such reviews are highly subjective. JL
Cathleen O'Grady reports in ars technica:
User ratings are often a good way to make choices about a purchase, but they come with some inherent weaknesses. For a start, they suffer badly from sampling bias: the kind of person who writes a review isn’t necessarily a good representative of all people who bought the product. Review-writers are likely to be people who have had either a very positive or very negative response to a product. And often, only a few people rate a particular product. Like an experiment with a small sample size, this makes the average rating less reliable.
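To make the sample-size point concrete, here is a minimal simulation sketch (with hypothetical numbers, not data from the study): small batches of star ratings drawn from the same underlying distribution produce much noisier averages than large ones.

```python
import random
import statistics

# Hypothetical "true" rating distribution for one product (1-5 stars),
# skewed toward the extremes to mimic the self-selection described above.
STARS = [1, 2, 3, 4, 5]
WEIGHTS = [0.15, 0.05, 0.05, 0.15, 0.60]

def average_rating(n_reviews: int) -> float:
    """Average of n simulated reviews drawn from the same distribution."""
    reviews = random.choices(STARS, weights=WEIGHTS, k=n_reviews)
    return statistics.mean(reviews)

random.seed(0)
for n in (5, 50, 500):
    averages = [average_rating(n) for _ in range(1000)]
    print(f"n={n:>3}: averages span {min(averages):.2f}-{max(averages):.2f}, "
          f"sd {statistics.pstdev(averages):.2f}")
```

With only a handful of reviews, the simulated average for the very same product swings widely from run to run; with hundreds of reviews it barely moves.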
It turns out people are pretty bad at taking these weaknesses into account when they assess online product ratings, according to a recent paper in the Journal of Consumer Research. The authors found that Amazon ratings might not be the best way to predict the quality of a product, and that these ratings reflect subjective judgments that potential buyers don't take into account.
To assess the quality of user ratings, the researchers used ratings from Consumer Reports (CR), a user-supported organization that buys products and tests them rigorously before assigning a score. Generally, CR is considered a reasonable approximation of objective quality within a few different academic fields. To test the reliability, the researchers took CR scores for 1,272 products and compared them to more than 300,000 Amazon ratings for the same items.
They found that there was a very poor correlation between user ratings and CR scores: products with higher user ratings weren’t particularly likely to have great CR scores. For around a third of the product categories tested, the correlations were actually negative (that is, the higher the CR score, the lower the Amazon ratings). This correlation was especially bad for products that had a small number of ratings or ratings that varied wildly. However, even products with high numbers of very similar ratings didn’t correlate particularly well.
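For readers who want to picture the kind of comparison involved, a linear or rank correlation between CR-style scores and average user ratings can be computed in a few lines. The numbers below are invented for illustration, not taken from the paper's 1,272 products:

```python
from scipy.stats import pearsonr, spearmanr

# Invented scores for a handful of products in one category.
cr_scores = [82, 74, 68, 61, 55, 90, 47]            # CR-style scores, 0-100
amazon_avgs = [4.1, 4.4, 4.3, 3.9, 4.5, 4.2, 4.0]   # average star ratings, 1-5

r, r_p = pearsonr(cr_scores, amazon_avgs)
rho, rho_p = spearmanr(cr_scores, amazon_avgs)
print(f"Pearson r = {r:.2f} (p = {r_p:.2f}); Spearman rho = {rho:.2f} (p = {rho_p:.2f})")
# Values near zero, or negative, would mirror the weak or inverted
# relationships the study reports for many product categories.
```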
Perhaps more worrying is that user ratings seem to be heavily influenced by subjective factors like brand image. Premium brands and more expensive products had inflated user ratings.
It’s possible that expensive, premium products are better quality and therefore deserving of those ratings. However, the researchers controlled for CR scores to get a sense for whether premium products were rated higher regardless of their quality. They were—and brand image explained a lot more of the variability in the ratings than quality did.
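"Controlling for" quality essentially means fitting a model that includes both quality and a brand indicator, so the brand effect is estimated with quality held fixed. A minimal sketch of that idea, using made-up data and a hypothetical premium-brand flag rather than the authors' actual model:

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(42)
n = 200

cr_score = rng.uniform(40, 95, n)      # objective-quality proxy (made up)
premium = rng.integers(0, 2, n)        # 1 = premium brand (made up)
# Simulated user ratings: a weak quality effect, a larger brand effect, noise.
user_rating = 3.0 + 0.005 * cr_score + 0.4 * premium + rng.normal(0, 0.3, n)

X = sm.add_constant(np.column_stack([cr_score, premium]))
fit = sm.OLS(user_rating, X).fit()
# fit.params = [intercept, quality coefficient, brand coefficient];
# the brand coefficient is the premium effect with quality held constant.
print(fit.params)
```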
People who base their purchasing decisions on user ratings are obviously getting information that extends beyond quality. Potential purchasers might also be interested in features like aesthetics, particular capabilities, or other, more subjective measures of satisfaction. So a decision to purchase based on user ratings can’t necessarily be taken as showing that people are making serious quality judgments on the basis of reviews.
However, when the researchers looked at how people made guesses about product quality in particular, they found people weren't particularly well-equipped to take the limitations of user ratings into account.
The researchers asked participants to search for pairs of products on Amazon and then judge which product they thought CR would rate more highly on the basis of quality. In a follow-up study, they asked participants to judge which product was better quality, leaving out the question of CR altogether. Even when products had only small numbers of reviews, people based their judgments about quality very strongly on the average user rating.
This study only touches on one problem with online reviews. Failure to take a critical look at a product is only the tip of the iceberg when it comes to unreliability in user ratings. Fictitious reviews posted by marketers and "herd" effects, in which reviewers are swayed by earlier reviews, add to the problem. Some companies even make special offers in return for positive reviews.
Then there’s the simple problem that large groups of people are fallible: they give nasty reviews because they didn’t like the packaging, or they review the wrong product, or just have wildly differing ideas about how to assess something. Psychologically, there's the risk of confirmation bias, as a reader who really likes the sound of an item may just want confirmation that it’s all OK before going ahead with the purchase.
Despite all these problems, it’s difficult to think of a good alternative to a large sample of user ratings. Although it’s taken as a given by the authors (and, as they write, in a lot of academic literature) that CR ratings are a good, objective measure of quality, user ratings do have some advantages. For instance, if only a handful of people out of hundreds of reviewers complain about a fault with a product, it’s possible to get a sense of failure rate, which isn’t possible with a single CR score. There’s also all the information contained in the written reviews, which may be useful in ways that can’t be addressed by this study.
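One way to read a failure rate out of a large pile of reviews, as the paragraph suggests, is to treat fault-mentioning reviews as a proportion and put a confidence interval around it. The counts below are invented; the article gives no figures:

```python
from statsmodels.stats.proportion import proportion_confint

# Invented example: 12 of 480 reviews describe the same hardware fault.
faults, total = 12, 480
low, high = proportion_confint(faults, total, alpha=0.05, method="wilson")
print(f"Fault mentions: {faults / total:.1%} "
      f"(95% Wilson interval: {low:.1%}-{high:.1%})")
# A single CR score for one tested unit can't provide this kind of
# reliability signal, which is the advantage the paragraph points to.
```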
The authors have some suggestions for how people can get better information from user ratings. The first and most obvious one is sample size: if there are only a handful of reviews for a product, it's unwise to infer much about the product's quality based on those reviews. But even if there's a healthy sample size, it's always good to look for other sources of information, too.
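One common practical response to that sample-size advice, though not something the paper itself proposes, is to shrink a product's average toward a category-wide prior when it has few ratings, so a perfect 5.0 from three reviews doesn't outrank a 4.6 from a thousand. A minimal sketch, with illustrative prior values:

```python
def adjusted_rating(avg: float, n_reviews: int,
                    prior_mean: float = 3.9, prior_weight: int = 25) -> float:
    """Shrink a small-sample average toward a prior (Bayesian-style smoothing).

    prior_mean and prior_weight are illustrative choices: e.g. the
    category-wide average rating and a pseudo-count of 25 reviews.
    """
    return (prior_weight * prior_mean + n_reviews * avg) / (prior_weight + n_reviews)

print(adjusted_rating(5.0, 3))     # few reviews: pulled strongly toward 3.9
print(adjusted_rating(4.6, 1000))  # many reviews: stays close to 4.6
```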