Jennifer Baker reports in The Next Web:
“The tension between the integrity of electoral systems and an unregulated digital sphere has become an inherent danger to democracies worldwide.” The twin shocks of the Brexit referendum and the US presidential election in 2016 point the finger at microtargeting. But the volume, velocity and vectors for information have all increased exponentially, while the question of who takes responsibility for generating and disseminating that information grows ever more opaque. We want responsibility and accountability. And transparency appears to be the first step on that road. But do we want Big Tech to become the arbiter of truth?
We live in an Age of Disinformation. Over the past few years, the phenomenon has slowly but surely risen to the top of the political agenda.
Political campaigning — often verging on propaganda — has existed for centuries, so what’s changed? Why has the transition to a digital advertising space caused so much concern? What is it about this new way of communicating that has enabled the spread of lies and the manipulation of the general public?
The twin shocks of the Brexit referendum and the US presidential election in 2016 — and the related Facebook-Cambridge Analytica scandal — point the finger at microtargeting. But there is also the very real tidal wave of information that individuals struggle to deal with. The volume, velocity and vectors for information have all increased exponentially, while the question of who takes responsibility for generating and disseminating that information grows ever more opaque.
At a recent event in Brussels, the European Partnership for Democracy (EPD) argued that “the tension between the integrity of electoral systems and a vastly unregulated digital sphere have arguably become an inherent danger to democracies worldwide. It is clear that additional safeguards are needed that would allow regulators, and the public more generally, to understand who is funding what online.”
While it could certainly be argued that electoral law in many European countries is badly in need of an upgrade, at EU level, some — tentative — steps are being taken. European Commission President-elect Ursula von der Leyen announced her plans for a European Democracy Action Plan that should include legislative proposals to guarantee transparency in political advertising. And in 2018, a self-regulatory Code of Practice was signed between the European Commission, major tech companies — Google, Facebook, Twitter, Microsoft and Mozilla — and advertisers.
However, self-regulation lacks teeth. Asking companies to be accountable to themselves does not necessarily guarantee results.
Before the European Parliament elections of 2019, the EPD commissioned research in three countries (the Czech Republic, Italy and the Netherlands) to monitor the extent to which tech platforms comply with the Code of Practice against disinformation in matters related to digital political advertising.
The results were not reassuring. “We believe that this central part of the connected society cannot be left to voluntary systems of company-level self-regulation, but should be subject to legal accountability and regulatory scrutiny in order to protect democracy and freedom of speech online,” explained Ruth-Marie Henckes, EPD Advocacy and Communications Officer.
At the end of October, the European Commission published the first annual self-assessment of the signatories to the Code. Despite the opportunity for greater transparency, the Commission concludes that “further serious steps by individual signatories and the community as a whole are still necessary.”
Actions taken by the platforms “vary in terms of speed and scope” and in general “lag behind the commitments” made, while “cooperation with fact-checkers across the EU is still sporadic and without a full coverage of all Member States and EU languages.”
“Overall, the reporting would benefit from more detailed and qualitative insights in some areas and from further big-picture context, such as trends. In addition, the metrics provided so far are mainly output indicators rather than impact indicators,” said the Commission.
The Commission plans to carry out a comprehensive assessment of the effectiveness of the Code to be presented in early 2020, and will also take into account input from the European Regulators Group for Audiovisual Media Services (ERGA), evaluations from a third party selected by the signatories and an independent consultant engaged by the Commission, as well as a report on the 2019 elections to the European Parliament.
The issue of transparency is one that is raised again and again, including at the EPD Virtual Insanity event. Although it was widely agreed that users should be able to understand why they are seeing an ad and what data was used to target that ad, there remain obstacles.
There are inherent difficulties in defining a “political” ad. Different platforms use different definitions. Banning all political ads, as Twitter has recently done, can result in blocking some political content, such as climate change activism, that is not about targeting a particular election or referendum.
Google has also announced major changes to its political ads policy — “political advertisers” will only be able to target ads based on users’ age, gender and location. But who decides who is and isn’t a “political advertiser”? Only political parties? Third parties? Pressure groups? What is and is not designated “political” remains murky at best.
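To make the definitional problem concrete, the sketch below (in Python) shows the shape of the check a platform might run under a rule like Google’s. Everything in it, from the function name to the allow-list and the ad fields, is a hypothetical illustration, not Google’s actual systems or API. Notice that the restriction itself is trivial to enforce; deciding who counts as a “political advertiser” is the hard part.

```python
# Hypothetical sketch only: the names and rules below do not come from
# Google's real ads infrastructure; they illustrate the shape of a
# "restricted targeting" check for advertisers classified as political.

ALLOWED_POLITICAL_CRITERIA = {"age", "gender", "location"}

def disallowed_criteria(is_political_advertiser: bool, targeting: dict) -> list:
    """Return the targeting criteria this ad would not be allowed to use."""
    if not is_political_advertiser:
        return []  # non-political advertisers keep the full targeting toolbox
    return [c for c in targeting if c not in ALLOWED_POLITICAL_CRITERIA]

# The check itself is mechanical. The murky part sits upstream:
# who decides that is_political_advertiser should be True?
print(disallowed_criteria(
    is_political_advertiser=True,
    targeting={"age": "25-34", "location": "BE", "interests": ["climate"]},
))  # -> ['interests']: interest-based microtargeting is blocked
```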
Given this, one might have some small amount of sympathy for Facebook’s controversial decision to allow politicians an exemption from its own ban on false claims in advertising. Fair enough if they say it is not their job to fact-check politicians, but examining the role of these companies leads to the conclusion that they are overwhelmingly the gatekeepers of public discourse online. It is not inherently wrong that social networks have their own ethical guidelines on what can and cannot be published on their platforms — and yet, do we want Big Tech to become the arbiter of truth?
The answer seems to be that we want responsibility and accountability. And transparency appears to be the first step on that road. Platforms should at the very least be proactive in complying with the many rules already in place for social networks, at both EU and national level.
Clearly illegal content, such as hate speech, should be routinely flagged and taken down. Users should be informed about when and how this is happening — current transparency reports rarely explain which content has been removed and why, or whether and how a particular user was targeted.
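What would informing users actually look like in practice? The sketch below is a rough illustration, assuming nothing about any platform’s real schema, of the kind of per-removal record a meaningful transparency report could expose; every field name is invented for the purpose.

```python
# Hypothetical per-removal transparency record. The fields are assumptions
# about what a useful report could disclose, not any platform's actual
# format; today's aggregate reports rarely answer these questions for the
# individual user affected.
from dataclasses import dataclass, asdict
import json

@dataclass
class RemovalRecord:
    content_id: str          # which content was removed
    rule_violated: str       # why: the specific policy invoked
    legal_basis: str         # e.g. national hate-speech law vs. platform terms
    detection_method: str    # user report, trusted flagger, or automated filter
    user_was_targeted: bool  # whether the poster was specifically singled out
    appeal_route: str        # how the decision can be challenged

print(json.dumps(asdict(RemovalRecord(
    content_id="post-1234",
    rule_violated="hate speech",
    legal_basis="platform community standards",
    detection_method="automated filter",
    user_was_targeted=False,
    appeal_route="in-app appeal within 14 days",
)), indent=2))
```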
Given the scale of the problem and the potential harmful consequences for democracy, disinformation needs to be taken seriously.
Even more sophisticated possibilities for manipulation, so-called deepfakes, are likely to emerge in the coming years. Not only can they mislead convincingly, but they will also have a wider-ranging “chilling effect” as people, increasingly unable to tell what is true or false, dismiss even factual content as “fake.” A failure to take this threat seriously will play straight into the hands of the despots and disrupters of democracy, and Europe needs to act now.