Gideon Rosenblatt comments in The Vital Edge (http://www.the-vital-edge.com/artificial-intelligence-behavior/):
Neuroscientist Jeffrey Lin wants to dramatically reduce people’s toxic behavior in online gaming communities, and he’s using artificial intelligence to do it.
When players experience persistent abuse or toxic behavior in a game, they are on average 320% more likely to leave that game and never come back. Toxic behavior isn’t just a conspicuous PR problem for the gaming companies; it costs them real money.
I’m not a gamer and I don’t generally write about gaming, but it’s clear to me that what Lin and his colleagues at Riot Games are doing deserves attention. They aren’t just shaping the future of gaming and online communities; they’re demonstrating how artificial intelligence may one day be used to modify human behavior on very large scales.
Crowdsourcing the Judges
In 2011, Riot Games unveiled something called the “Tribunal” to deal with toxic behavior among the 67 million players of its flagship game, League of Legends. The system allowed players to file reports on abusive players, the most egregious of which were then reviewed by volunteer judges drawn from within the player community. These crowdsourced judges reviewed player feedback, metadata from games, and logs from chat conversations before rendering their decisions. The Tribunal proved a powerful tool for engaging players in policing their own community, and in combination with a number of other player behavior initiatives at Riot, it has been quite effective in reducing toxicity.
By profiling players according to the frequency of their toxic behavior, Lin’s team discovered that the worst 1% of players contributed just 5% of total toxicity in League of Legends, which meant the problem couldn’t be solved by banning a few bad apples. The real, very large-scale challenge was reforming the vast majority of players who simply had occasional outbursts of toxicity.
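The arithmetic behind that discovery is worth making concrete. Here’s a minimal sketch, in Python, of how one might measure what share of total toxicity comes from the most-reported slice of players. The input format is hypothetical, since Riot hasn’t published its actual pipeline:

    from collections import Counter

    def toxicity_concentration(incident_log, top_fraction=0.01):
        # incident_log: one player ID per confirmed toxic incident
        # (hypothetical input format; Riot's real data isn't public).
        counts = Counter(incident_log)
        total = sum(counts.values())
        ranked = sorted(counts.values(), reverse=True)  # worst offenders first
        top_n = max(1, int(len(ranked) * top_fraction))
        return sum(ranked[:top_n]) / total

    # If this returns roughly 0.05, the worst 1% of players account for
    # only about 5% of all toxicity, so banning them barely moves the needle.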
Last year, Riot took the Tribunal offline to revamp it. Lin recently outlined some of the reasons for that decision, the most interesting of which was that the Tribunal’s feedback was simply too slow. Decisions were taking a week or more, and that gap between infraction and consequence was too long for reported players to clearly remember what they’d done, let alone meaningfully change their behavior.
The new system would need to be large-scale and provide immediate feedback — both things that machines do quite well.
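To make that concrete, here’s a toy sketch of what machine-driven, immediate feedback could look like in principle. The keyword scorer, threshold, and warn function are stand-ins of my own invention; Riot’s actual system is far more sophisticated and hasn’t been published in this form:

    # Toy lexicon and threshold; purely illustrative.
    TOXIC_TERMS = {"noob", "uninstall", "trash"}

    def warn(player_id, text):
        # Stand-in delivery mechanism for an in-game warning.
        print(f"[warning to {player_id}] {text}")

    def toxicity_score(message):
        words = message.lower().split()
        hits = sum(1 for w in words if w in TOXIC_TERMS)
        return hits / max(len(words), 1)

    def on_chat_message(player_id, message, threshold=0.2):
        # Feedback arrives seconds after the infraction, not a week later.
        if toxicity_score(message) >= threshold:
            warn(player_id, "That language violates community standards.")

    # Example: on_chat_message("player42", "uninstall the game noob")
    # triggers an immediate warning rather than a week-old Tribunal verdict.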