As the rhetoric below from a Conservative British cabinet minister suggests, patience with social media companies' 'we're a force for good' posture, while they continue to prioritize audience growth and ad sales over security, is wearing thin. JL
Natasha Lomas reports in TechCrunch:
The UK government has kicked off the new year with another warning shot across the bows of social media giants.

In an interview with the Sunday Times newspaper, security minister Ben Wallace hit out at tech platforms like Facebook and Google, dubbing such companies “ruthless profiteers” and saying they are doing too little to help the government combat online extremism and terrorism despite hateful messages spreading via their platforms.
“We should stop pretending that because they sit on beanbags in T-shirts they are not ruthless profiteers. They will ruthlessly sell our details to loans and soft-porn companies but not give it to our democratically elected government,” he said.
Wallace suggested the government is considering a tax on tech firms to cover the rising costs of policing related to online radicalization.
“If they continue to be less than co-operative, we should look at things like tax as a way of incentivizing them or compensating for their inaction,” he told the newspaper.
Although the minister did not name any specific firms, a reference to encryption suggests Facebook-owned WhatsApp is one of the platforms being called out (the UK’s Home Secretary has previously attacked WhatsApp’s use of end-to-end encryption directly, describing it as an aid to criminals, and has repeatedly criticized e2e encryption itself).
“Because of encryption and because of radicalization, the cost… is heaped on law enforcement agencies,” Wallace said. “I have to have more human surveillance. It’s costing hundreds of millions of pounds. If they continue to be less than co-operative, we should look at things like tax as a way of incentivizing them or compensating for their inaction.
“Because content is not taken down as quickly as they could do, we’re having to de-radicalize people who have been radicalized. That’s costing millions. They can’t get away with that and we should look at all options, including tax,” he added.
Last year in Europe the German government agreed a new law targeting social media firms over hate speech takedowns. The so-called NetzDG law came into effect in October — with a three-month transition period for compliance (which ended yesterday). It introduces a regime of fines of up to €50M for social media platforms that fail to remove illegal hate speech after a complaint (within 24 hours in straightforward cases; or within seven days where evaluation of content is more difficult).
UK parliamentarians investigating extremism and hate speech on social platforms via a committee enquiry also urged the government to impose fines for takedown failures last May, accusing tech giants of taking a laissez-faire approach to moderating hate speech.
Tackling online extremism has also been a major policy theme for UK prime minister Theresa May’s government, and one which has attracted wider backing from G7 nations — converging around a push to get social media firms to remove content much faster.
Responding now to Wallace’s comments in the Sunday Times, Facebook sent us the following statement, attributed to its EMEA public policy director, Simon Milner:
Mr Wallace is wrong to say that we put profit before safety, especially in the fight against terrorism. We’ve invested millions of pounds in people and technology to identify and remove terrorist content. The Home Secretary and her counterparts across Europe have welcomed our coordinated efforts which are having a significant impact. But this is an ongoing battle and we must continue to fight it together, indeed our CEO recently told our investors that in 2018 we will continue to put the safety of our community before profits.
In the face of rising political pressure to do more to combat online extremism, tech firms including Facebook, Google and Twitter set up a partnership last summer focused on reducing the accessibility of Internet services to terrorists.
This followed an announcement, in December 2016, of a shared industry hash database for collectively identifying terrorist content, with the newer Global Internet Forum to Counter Terrorism intended to put a more formal structure around maintaining and improving the database.
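To make the mechanics concrete, here is a minimal, hypothetical Python sketch of how screening against such a shared database could work: each platform fingerprints an upload and checks it against the pooled set of fingerprints before the content goes live. The real consortium database uses perceptual hashes designed to survive re-encoding and cropping, not the plain SHA-256 shown here, and every name below (SHARED_HASH_DB, is_known_terror_content) is invented for illustration.

    import hashlib

    # Hypothetical pooled database of media fingerprints contributed by
    # participating platforms. This entry is the SHA-256 digest of the
    # demo upload below, so the example produces a match.
    SHARED_HASH_DB = {
        "9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08",
    }

    def fingerprint(media: bytes) -> str:
        # A production system would use a perceptual hash; SHA-256 keeps
        # the sketch self-contained but only matches byte-identical files.
        return hashlib.sha256(media).hexdigest()

    def is_known_terror_content(media: bytes) -> bool:
        # Screen an upload against the shared database at ingest time.
        return fingerprint(media) in SHARED_HASH_DB

    upload = b"test"
    if is_known_terror_content(upload):
        print("block upload and queue for human review")
    else:
        print("allow upload; rely on reports and classifiers downstream")

The design point the sketch illustrates is why sharing helps: once one platform fingerprints a piece of terrorist media, every participant can block re-uploads of it without having to rediscover it independently.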
But despite some public steps to co-ordinate counter-terrorism action, the UK’s Home Affairs committee expressed continued exasperation with Facebook, Google and Twitter for failing to effectively enforce their own hate speech rules in a more recent evidence session last month.
During the session, though, Facebook’s Milner claimed the company has made progress on combating terrorist content, and said it will double the number of people working on “safety and security” by the end of 2018, to circa 20,000.
In response to a request for comment on Wallace’s remarks, a YouTube spokesperson emailed us the following statement:
Violent extremism is a complex problem and addressing it is a critical challenge for us all. We are committed to being part of the solution and we are doing more every day to tackle these issues. Over the course of 2017 we have made significant progress through investing in machine learning technology, recruiting more reviewers, building partnerships with experts and collaboration with other companies through the Global Internet Forum.

In a major shift last November YouTube broadened its policy for taking down extremist content: it now removes not only videos that directly preach hate or seek to incite violence but also other videos of named terrorists (with exceptions for journalistic or educational content).
The move followed an advertiser backlash after marketing messages were shown being displayed on YouTube alongside extremist and offensive content.
Answering UK parliamentarians’ questions about how YouTube’s recommendation algorithms are actively pushing users to consume increasingly extreme content, in a sort of algorithmic radicalization, Nicklas Berild Lundblad, EMEA VP for public policy, admitted there can be a problem. He said the platform is working on applying machine learning technology to automatically limit certain videos so they are not algorithmically surfaceable, and thus cannot spread via recommendations.
Twitter also moved to broaden its hate speech policies last year — responding to user criticism over the continued presence of hate speech purveyors on its platform despite having community guidelines that apparently forbid such conduct.
A Twitter spokesman declined to comment on Wallace’s remarks.
Speaking to the UK’s Home Affairs committee last month, the company’s EMEA VP for public policy and communications, Sinead McSweeney, conceded that it has not been “good enough” at enforcing its own rules around hate speech, adding: “We are now taking actions against 10 times more accounts than we did in the past.”
But regarding terrorist content specifically, Twitter reported a big decline in the proportion of pro-terrorism accounts being reported on its platform as of September, along with apparent improvements in its anti-terrorism tools — claiming 95 per cent of terrorist account suspensions had been picked up by its systems (vs manual user reports).
It also said 75 per cent of these accounts were suspended before they’d sent their first tweet.