Which kinda sums up what happened to the innovative algorithm for which Netflix laid out a cool mill.
To be completely fair, however (not always our first instinct), a lot had changed between the time the competition was announced and the winner was declared. The most significant change was that streaming video had become a viable option for more of the company's customers, which, understandably, led those customers to use the Netflix service in new ways.
So it wasn't that the innovation didn't work; it just wasn't necessarily worth the engineering hassles that implementation would have entailed. This turns out to be a fairly common problem. Long lead times and the pace of change in the digital world are not always simpatico. And that disruptive thing, however philosophically compelling, can get expensive when the systems and people to be disrupted are freighted with long-term service agreements, employment contracts, and other impedimenta of the modern age. Figuring out how to manage the implementation risk is a worthwhile challenge. But one cannot help but wonder if there may already be an app for that. JL
Mike Masnick reports on Techdirt's Innovation blog (hat tip: Dan Pink):
You probably recall all the excitement that went around when a group finally won the big Netflix $1 million prize in 2009, improving Netflix's recommendation algorithm by 10%. But what you might not know is that Netflix never implemented that winning solution. Netflix recently put up a blog post discussing some of the details of its recommendation system, which (as an aside) explains why the winning entry was never used. First, they note that they did make use of an earlier bit of code that came out of the contest:
A year into the competition, the Korbell team won the first Progress Prize with an 8.43% improvement. They reported more than 2000 hours of work in order to come up with the final combination of 107 algorithms that gave them this prize. And, they gave us the source code. We looked at the two underlying algorithms with the best performance in the ensemble: Matrix Factorization (which the community generally called SVD, Singular Value Decomposition) and Restricted Boltzmann Machines (RBM). SVD by itself provided a 0.8914 RMSE (root mean squared error), while RBM alone provided a competitive but slightly worse 0.8990 RMSE. A linear blend of these two reduced the error to 0.88. To put these algorithms to use, we had to work to overcome some limitations, for instance that they were built to handle 100 million ratings, instead of the more than 5 billion that we have, and that they were not built to adapt as members added more ratings. But once we overcame those challenges, we put the two algorithms into production, where they are still used as part of our recommendation engine.
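To make the blending step in that passage concrete, here is a minimal sketch in Python of linearly combining two models' rating predictions and scoring them with RMSE. The synthetic data, the noise levels, and the fixed blend weight are assumptions for illustration only, not Netflix's or Korbell's code; a real ensemble would fit the weight on held-out data.

```python
# A minimal sketch of the linear-blend idea quoted above: combine two
# models' rating predictions and evaluate with RMSE. The toy data and the
# fixed blend weight are illustrative assumptions, not Netflix's code.
import numpy as np

def rmse(predicted, actual):
    """Root mean squared error, the Netflix Prize's evaluation metric."""
    return float(np.sqrt(np.mean((predicted - actual) ** 2)))

rng = np.random.default_rng(0)
actual = rng.uniform(1, 5, size=10_000)  # held-out star ratings

# Stand-ins for an SVD-style matrix factorization and an RBM: predictions
# equal to truth plus noise at roughly the error levels quoted above.
svd_pred = actual + rng.normal(0.0, 0.89, size=actual.size)
rbm_pred = actual + rng.normal(0.0, 0.90, size=actual.size)

# Linear blend: w * svd + (1 - w) * rbm. Because the toy errors here are
# fully independent, the gain is larger than the real-world 0.88 result;
# in practice the models' errors are correlated.
w = 0.6
blend = w * svd_pred + (1 - w) * rbm_pred

print(f"SVD alone: {rmse(svd_pred, actual):.4f}")
print(f"RBM alone: {rmse(rbm_pred, actual):.4f}")
print(f"Blend:     {rmse(blend, actual):.4f}")
```

The reason a blend can beat both of its inputs is that the two models make partially uncorrelated errors, so averaging cancels some of the noise.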
Neat. But the winning prize? Eh... just not worth it:
We evaluated some of the new methods offline but the additional accuracy gains that we measured did not seem to justify the engineering effort needed to bring them into a production environment.
It wasn't just that the improvement was marginal; Netflix's business had shifted, and with it the way customers used the product and the kinds of recommendations the company needed to make. Suddenly, the prize-winning solution just wasn't that useful -- in part because many people were now streaming videos rather than renting DVDs -- and it turns out that the recommendations that work for streaming are different from those for a rental you'll watch a few days later.
One of the reasons our focus in the recommendation algorithms has changed is because Netflix as a whole has changed dramatically in the last few years. Netflix launched an instant streaming service in 2007, one year after the Netflix Prize began. Streaming has not only changed the way our members interact with the service, but also the type of data available to use in our algorithms. For DVDs, our goal is to help people fill their queue with titles to receive in the mail over the coming days and weeks; selection is distant in time from viewing, people select carefully because exchanging a DVD for another takes more than a day, and we get no feedback during viewing. For streaming, members are looking for something great to watch right now; they can sample a few videos before settling on one, they can consume several in one session, and we can observe viewing statistics such as whether a video was watched fully or only partially.
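Here is a small, hedged sketch of the kind of implicit signal that passage describes: scoring titles by whether viewers finished them or bailed early. The events, the 85% completion threshold, and the scoring weights below are all invented for illustration; Netflix has not published its actual rule.

```python
# A hedged sketch of using completion data ("watched fully or only
# partially") as an implicit signal for a "watch now" ranking. The events,
# threshold, and weights are assumptions, not Netflix's actual system.
from collections import defaultdict

# (user, title, fraction_watched) observed during streaming sessions.
events = [
    ("u1", "Arthouse Drama", 0.15),  # sampled, abandoned early
    ("u1", "Action Sequel", 1.00),   # watched to the end
    ("u2", "Action Sequel", 0.90),
    ("u2", "Arthouse Drama", 1.00),
    ("u3", "Arthouse Drama", 0.10),
]

def implicit_scores(events, full_watch=0.85):
    """Aggregate per-title scores: near-complete views count as positive
    signal, early abandonment as a weaker negative signal."""
    scores = defaultdict(float)
    for _user, title, fraction in events:
        scores[title] += 1.0 if fraction >= full_watch else -0.5
    return dict(scores)

# Rank titles for a "watch now" shelf; DVD-era star ratings never carried
# this kind of in-session evidence.
for title, score in sorted(implicit_scores(events).items(),
                           key=lambda kv: -kv[1]):
    print(f"{title}: {score:+.1f}")
```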
The viewing data obviously makes a huge difference, but I also find it interesting that there's a clear distinction in the kinds of recommendations that work when people are going to "watch now" vs. "watch in the future." I think this is an issue that Netflix has probably faced on the DVD side for years: when people rent a movie that won't arrive for a few days, they're making a bet on what they'll want at some future point. And people tend to have a more... optimistic viewpoint of their future selves. That is, they may be willing to rent, say, an "artsy" movie that won't show up for a few days, feeling that they'll be in the mood to watch it a few days (weeks?) in the future, knowing they're not in the mood immediately. But when the choice is immediate, they deal with their present selves, and that choice can be quite different. It would be great if Netflix revealed a bit more about those differences, but it is already interesting to see that the shift from delayed gratification to instant gratification clearly makes a difference in the kinds of recommendations that work for people.