But that doesn’t mean they will continue to do so. JL
Jack Balkin comments in Yale School of Management Insights:
Our economic system of capitalism and technological innovation increasingly pays for itself through monetizing personal data. … We can assume that just as advertisers have gotten better at targeting messages to get people to buy things, [they] will only get better at manipulating voters. … A business that monetizes personal data through its control of advertising networks [w]ouldn’t, without a push, change the way it does business.
What does the Facebook–Cambridge Analytica scandal say about the risks of trusting Facebook and other tech companies with our data? We talked with Jack Balkin, Knight Professor of Constitutional Law and the First Amendment at Yale Law School and founder and director of the Information Society Project, whose proposal for making online companies “information fiduciaries” has received new attention in the wake of the scandal.
What are we risking as consumers if companies don’t protect the data we give them online?

Our economic system of capitalism and technological innovation increasingly pays for itself through monetizing personal data, which is a polite way of saying that it pays for itself through surveillance, manipulation, and control of end users. There is a well-known saying in Silicon Valley that big data is the new oil—a resource easily collected and waiting to be harnessed to drive the engines of the digital economy. I would also say that big data is Soylent Green—it is made out of people and used to govern people.
The risk is not simply to us as consumers—there are also risks to us as citizens. The Cambridge Analytica scandal showed that data is not just something that companies use to get rich; it is also something that political actors and governments can use as well. Cambridge Analytica wanted to influence voters in 2016, and so did the Russian government. It’s not clear how effective Cambridge Analytica really was in influencing voters, just as it’s not clear how effective the armies of Russian bots were. Nevertheless, we can assume that just as advertisers have gotten better and better at targeting messages to get people to buy things, companies like Cambridge Analytica and the governments that work with them will only get better and better at manipulating voters over time.
Finally, governments want access to personal data not only to manipulate but also to surveil. Data helps them keep tabs on people—both their own citizens and people around the world.
Do you think the Facebook–Cambridge Analytica scandal will change anything about either public perception of this issue or how tech companies think about data?
The Facebook–Cambridge Analytica episode was the kind of scandal that showed how our current system of data capitalism actually operates. It was a bit like the Snowden revelations in the summer of 2013, which showed how what I call the National Surveillance State actually works. As with those revelations, however, the jury is still out on whether the episode will lead to serious changes in law and public policy.
The scandal has certainly been a public relations disaster for Facebook, which is trying to limit the damage through a series of announcements about new reforms and programs. However well-meaning its stated goals of creating community and connecting people with information that is meaningful to them, we should not forget that Facebook is centrally a business that monetizes personal data through its control of advertising networks. Thus, we shouldn’t expect that, without a push from governments, Facebook will significantly change the way it does business.
The European Union’s General Data Protection Regulation (GDPR), for example, has required Facebook (and many other companies) to make changes in how it does business; so too would credible threats of anti-competition and antitrust scrutiny. The United States currently lacks comprehensive protection of personal privacy for consumers and citizens, and antitrust enforcement has been moribund for far too long. I don’t think that there will be significant movement on this front until there is a change of government in the United States, and one or both of the two major political parties finds data privacy to be a winning issue with voters.
You’ve proposed a framework for making online companies “information fiduciaries” that have a legal obligation to protect our data. What would that mean in practical terms?
We rely on digital companies to perform many different tasks for us. They know a lot about us; we don’t know very much about them. As a result, we are especially vulnerable to them, and we have to trust that they won’t betray us or manipulate us for their own ends.
The law has long recognized that clients or patients of professionals like doctors and lawyers are in a similar situation: we need to trust them with sensitive personal information about ourselves, but they could injure us as a result. Therefore, the law treats them as fiduciaries. Fiduciary relationships are relationships of good faith and loyalty toward people who are in special positions of vulnerability. Fiduciaries have special duties of care and loyalty toward their clients and beneficiaries. The kinds of duties they have depend on the nature of their business, so digital companies won’t have all of the same obligations that doctors and lawyers currently do. Even so, digital companies will owe a duty of trustworthiness and confidentiality with respect to their customers. Facebook’s founder, Mark Zuckerberg, has publicly described the scandal as a “breach of trust” toward his end users. I would agree.
What would the idea of information fiduciaries mean in practice? Let’s use the Cambridge Analytica story as an example. Under the model, Facebook has a duty of confidentiality, a duty of care, and a duty of loyalty.
The duties of confidentiality and care mean that Facebook must keep its customers’ data confidential and secure. It has to make sure that anyone who shares or uses the data is trustworthy and bound by the same legal requirements as Facebook is. It must vet its potential partners to make sure that they are fully trustworthy, subject them to regular audits, and, if they violate the terms of their agreements, sue them and get back all the data that Facebook shared with them. Facebook failed at each of these obligations: it did not keep its data secure, and it did not vet or audit its partners properly.
What about the duty of loyalty? Facebook’s most basic duty is not to act as a con artist. It has a duty not to hold itself out as a trustworthy organization that will look out for its end users’ interests—in order to induce their trust—and then turn around and betray that trust by harming and manipulating its end users to its own benefit.
Here too, Facebook fell short. By now people more or less expect that Facebook will serve them ads based on data it collects. So the mere fact that Facebook makes money from targeted ads does not by itself violate the duty of loyalty. But it’s one thing for Facebook to serve you ads for shampoo in your news feed; it’s quite another to give access to your personal information to businesses like Cambridge Analytica that are deliberately trying to manipulate you by serving political ads. The objection here is not that the ads are political, but that this is an unexpected use of personal data that many people would find offensive and a breach of trust.
Facebook may have also breached its duty of loyalty by designing its system to addict its end users, and by performing social science experiments on its end users without the equivalent of a human subjects review board to prevent overreaching and manipulation.
Some parts of Facebook’s business model are fully consistent with being an information fiduciary. But far too often, it has abused its end users’ trust. The point of this approach is to hold it to a higher standard of trustworthiness precisely because of the power it has over its customers.