The interesting question is whether machine learning will enable the company - and other social media - to get ahead of this abuse and possibly stifle it. JL
Manish Singh reports in VentureBeat:
WhatsApp bans 2 million accounts each month. A machine learning system uses learnings from the company’s past dealings with problematic accounts and from specific scenarios. (It) bans 20% of bad accounts at the time of registration. WhatsApp looks at factors including the user’s IP address, the country of origination for phone numbers used to sign up, how old the account is, and whether that account started sending a lot of texts as soon as it was created. 75% of the accounts banned are handled without human intervention. (It) has enforced a limit on the number of texts that can be forwarded to other users.
WhatsApp today outlined how it is tackling the spread of misinformation on its instant messaging platform as India heads into its biggest election. The country is WhatsApp’s largest market and one in which its service has been closely scrutinized by the government.
At a press briefing VentureBeat attended in New Delhi early today, company executives said they have built a machine learning system to detect and weed out users who engage in inappropriate behavior, such as sending bulk messages and creating multiple accounts with the sole purpose of spreading questionable content on the platform.
Automated fake accounts and people who seek to create havoc are barred from the platform at various stages — at the time of registration, while messaging, and when they are reported by others, the company’s executives said.
Bans
Overall, WhatsApp bans about 2 million accounts on its platform each month, a spokesperson said. To address this issue, a machine learning system uses learnings from the company’s past dealings with problematic accounts and from specific scenarios engineers followed when taking down accounts, said Matt Jones, a software engineer at WhatsApp. This machine learning system has reached a level of sophistication that allows it to ban 20 percent of bad accounts at the time of registration, according to the company.
In terms of overall flags, Jones said WhatsApp looks at various factors, including the user’s IP address and the country of origination for phone numbers used to sign up for the service (and whether both are pointing to the same location), how old the account is, and whether that account started sending a lot of texts as soon as it was created. Seventy-five percent of the 2 million accounts WhatsApp bans in a month are handled without human intervention or a report filed by a user, said Carl Woog, a spokesperson for WhatsApp. WhatsApp is used by over 1.5 billion people, and according to industry estimates, it is now Facebook’s most popular service.
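As a rough illustration of how signals like these could feed an automated ban decision, the sketch below scores a registration on country mismatch, account age, and early bulk messaging. The feature names, weights, and threshold are assumptions made purely for explanation; WhatsApp has not published how its model actually works.

```python
# Illustrative sketch only: the feature names, weights, and threshold below are
# assumptions for explanation, not WhatsApp's actual pipeline.
from dataclasses import dataclass


@dataclass
class Registration:
    ip_country: str           # country inferred from the registering IP address
    phone_country: str        # country code of the phone number used to sign up
    account_age_minutes: int  # time since the account was created
    messages_sent: int        # messages sent so far


def risk_features(reg: Registration) -> dict:
    """Turn raw signals into the kind of features the article describes."""
    return {
        "country_mismatch": int(reg.ip_country != reg.phone_country),
        "is_brand_new": int(reg.account_age_minutes < 10),
        "burst_messaging": int(reg.messages_sent > 100 and reg.account_age_minutes < 60),
    }


def should_ban_at_registration(reg: Registration, threshold: float = 0.5) -> bool:
    """A toy linear score standing in for a trained classifier."""
    weights = {"country_mismatch": 0.3, "is_brand_new": 0.2, "burst_messaging": 0.6}
    feats = risk_features(reg)
    score = sum(weights[name] * value for name, value in feats.items())
    return score >= threshold


# A brand-new account whose IP and phone number disagree and that immediately
# sends bulk messages would be flagged.
suspicious = Registration("NG", "IN", account_age_minutes=5, messages_sent=500)
print(should_ban_at_registration(suspicious))  # True
```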
“As with any communications platform, sometimes people attempt to exploit our service. Some may want to distribute click-bait links designed to capture personal information, while others want to promote an idea. Regardless of the intent, automated and bulk messaging violates our terms of service, and one of our priorities is to prevent and stop this kind of abuse,” the company said.
Jones said WhatsApp has identified various ways users abuse the platform, including through special software that allows individuals to run multiple instances of different WhatsApp accounts on the same computer. The company also found special devices that support dozens of SIM cards, he said. WhatsApp says it has separate channels, teams, and APIs in place to keep a check on businesses using the platform.
India
WhatsApp’s push to contain questionable behavior comes as the company faces mounting pressure from several governments, including India’s. Beyond issues of election fairness, false information spread in India through WhatsApp has incited violence that cost dozens of lives, and similar issues have contributed to ethnic violence in Myanmar.
To counter that kind of abuse, the company has enforced a limit on the number of texts that can be forwarded to other users, along with a handful of additional product changes. The company has also run educational campaigns and advertisements on radio, television, and the web warning people to be cautious about what texts they share on the messaging platform. WhatsApp has also partnered with fact checkers Boom Live and Alt News, as well as news consortium Ekta. The company says it is also working with law enforcement and ramping up its local team.
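The forwarding limit amounts to a per-message counter. The sketch below shows one simple way such a cap could be enforced; the limit value and the in-memory storage are illustrative assumptions, not a description of WhatsApp's implementation.

```python
# Minimal sketch of a per-message forward cap, assuming a simple in-memory
# counter; the limit value and storage are illustrative, not WhatsApp's code.
from collections import defaultdict

FORWARD_LIMIT = 5  # assumed cap on how many chats one message may be forwarded to

forward_counts = defaultdict(int)  # message_id -> number of forwards so far


def try_forward(message_id: str, recipient_chats: list) -> list:
    """Allow forwards up to the cap and drop the rest."""
    allowed = []
    for chat in recipient_chats:
        if forward_counts[message_id] >= FORWARD_LIMIT:
            break
        forward_counts[message_id] += 1
        allowed.append(chat)
    return allowed


# Forwarding one message to eight chats only reaches the first five.
print(try_forward("msg-123", ["chat-%d" % i for i in range(8)]))
```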
In January of this year, WhatsApp partnered with BuffaloGrid, a U.K.-based startup that makes solar-powered mobile phone charging stations, to extend the reach of its campaign in India. The display on these grid machines can show advertisements or other messages, as an executive with the company — which has also inked deals with telecom operators in the country — demonstrated to VentureBeat on the sidelines of a recent event.
The general election in India commences in April and will test how prepared WhatsApp is to tackle misinformation on its platform. Leading up to this election, and during previous elections in the country, several prominent political parties have used WhatsApp groups to promote their platforms. However, WhatsApp said today that 90 percent of conversations on the platform happen between two users and that an average group consists of fewer than 10 members.
A case study published through Harvard Business Publishing looked at the election campaign of the BJP (the political party currently in power) in the 13th Legislative Assembly election and found that the party was able to reach a wider audience on WhatsApp than on Facebook, and at a fraction of the cost. WhatsApp executives said they have spoken with representatives of political parties in India to outline appropriate use of the platform. These executives declined to comment on whether the company has seen any progress on this front in recent state elections.