A Blog by Jonathan Low

 

Mar 18, 2020

AI Is An Ideology, Not A Technology

And the question this spurs is how it affects funding, future development, and public acceptance of AI. JL

Jaron Lanier and Glen Weyl comment in Wired:

The term AI doesn’t delineate specific technological advances. AI only references subjective tasks that we classify as intelligent. “AI” is best understood as a political and social ideology rather than as a basket of algorithms. The core of the ideology is that a suite of technologies, designed by a small technical elite, can become autonomous from, and replace rather than complement, much of humanity. This has strong resonances with technocracy and central planning, which viewed the replacement of most human judgment and agency with systems created by a small technical elite as inevitable.
A leading anxiety in both the technology and foreign policy worlds today is China’s purported edge in the artificial intelligence race. The usual narrative goes like this: Without the constraints on data collection that liberal democracies impose and with the capacity to centrally direct greater resource allocation, the Chinese will outstrip the West. AI is hungry for more and more data, but the West insists on privacy. This is a luxury we cannot afford, it is said, as whichever world power achieves superhuman intelligence via AI first is likely to become dominant.
If you accept this narrative, the logic of the Chinese advantage is powerful. What if it’s wrong? Perhaps the West’s vulnerability stems not from our ideas about privacy, but from the idea of AI itself.
After all, the term "artificial intelligence" doesn’t delineate specific technological advances. A term like “nanotechnology” classifies technologies by referencing an objective measure of scale, while AI only references a subjective measure of tasks that we classify as intelligent. For instance, the adornment and “deepfake” transformation of the human face, now common on social media platforms like Snapchat and Instagram, was introduced in a startup sold to Google by one of the authors; such capabilities were called image processing 15 years ago, but are routinely termed AI today. The reason is, in part, marketing: lately, software benefits from an air of magic when it is called AI. If “AI” is more than marketing, then it might be best understood as one of a number of competing philosophies that can direct our thinking about the nature and use of computation.
A clear alternative to “AI” is to focus on the people present in the system. If a program is able to distinguish cats from dogs, don’t talk about how a machine is learning to see. Instead talk about how people contributed examples in order to define the visual qualities distinguishing “cats” from “dogs” in a rigorous way for the first time. There is always a second, human-centered way to conceive of any situation in which AI is purported to be at work. This matters, because the AI way of thinking can distract from the responsibility of humans.
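To make the reframing concrete, here is a minimal sketch of what “a machine learning to see” reduces to: a function fit to distinctions that people already drew. The choice of Python with scikit-learn, and the feature values and labels, are illustrative assumptions only; the essay names no particular system.

```python
# A program that "distinguishes cats from dogs" is, concretely, a function
# fit to examples that people labeled. The human judgment is the input; the
# "intelligence" is a statistical summary of it. Feature values and labels
# below are hypothetical stand-ins for illustration.
from sklearn.linear_model import LogisticRegression

# Each row is a crude, invented image descriptor (say, ear pointiness and
# snout length). Every label was assigned by a person.
human_labeled_features = [
    [0.9, 0.2], [0.8, 0.3], [0.7, 0.1],  # images people called "cat"
    [0.2, 0.9], [0.1, 0.8], [0.3, 0.7],  # images people called "dog"
]
human_labels = ["cat", "cat", "cat", "dog", "dog", "dog"]

model = LogisticRegression().fit(human_labeled_features, human_labels)

# The model can only echo back distinctions people already drew.
print(model.predict([[0.85, 0.15]]))  # expected: ['cat']
```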
AI might be achieving unprecedented results in diverse fields, including medicine, robotic control, and language/image processing. Or a certain way of talking about software might be in play, one that declines to fully celebrate the people who, working together through ever-improving information systems, are actually achieving those results. “AI” might be a threat to the human future, as is often imagined in science fiction, or it might be a way of thinking about technology that makes it harder to design technology so it can be used effectively and responsibly. The very idea of AI might create a diversion that makes it easier for a small group of technologists and investors to claim all rewards from a widely distributed effort. Computation is an essential technology, but the AI way of thinking about it can be murky and dysfunctional.
You can reject the AI way of thinking for a variety of reasons. One is that you view people as having a special place in the world and being the ultimate source of the value on which AI depends. (That might be called the humanist objection.) Another view is that no intelligence, human or machine, is ever truly autonomous: everything we accomplish depends on the social context established by other human beings who give meaning to what we wish to accomplish. (The pluralist objection.) Regardless of how one sees it, an understanding of AI focused on independence from, rather than interdependence with, humans misses most of the potential for software technology.
Supporting the philosophy of AI has burdened our economy. Less than 10 percent of the US workforce is officially employed in the technology sector, compared with the 30–40 percent employed in the then-leading industrial sectors of the 1960s. At least part of the reason is that when people provide data, behavioral examples, and even active problem solving online, it is not considered “work” but is instead treated as part of an off-the-books barter for certain free internet services. Conversely, when companies find creative new ways to use networking technologies to enable people to provide services previously done poorly by machines, this gets little attention from investors, who believe “AI is the future” and so encourage further automation. This has contributed to the hollowing out of the economy.
Bridging even a part of this gap, and thus reducing the underemployment of workforces in the rich world, could expand the productive output of Western technology far more than greater receptiveness to surveillance does for China. In fact, as recent reporting has shown, China’s greatest advantage in AI is less surveillance than a vast shadow workforce actively labeling the data fed into algorithms. As the relative failures of past hidden labor forces suggest, these workers would become more productive if they could learn to understand and improve the information systems they feed, and were recognized for this work, rather than being erased to maintain the “ignore the man behind the curtain” mirage on which AI rests. Worker understanding of production processes, empowering deeper contributions to productivity, was at the heart of the Japanese kaizen-driven Toyota Production System miracle of the 1970s and 1980s.
To those who fear that bringing data collection into the daylight of acknowledged commerce will encourage a culture of ubiquitous surveillance, we must point out that it is the only alternative to such a culture. It is only when workers are paid that they become citizens in full. Workers who earn money also spend money where they choose; they gain deeper power and voice in society. They can gain the power to choose to work less, for instance. This is how worker conditions have improved historically.
It is not surprising that quantitative technical and economic arguments converge on the centrality of human value. Estimates suggest that the total computational capacity of a single human mind is greater than that of all today’s computers in the world put together. With the pace of processor improvements slowing as Moore’s law ends, the prospects of this changing dramatically anytime soon are dim.
Nor is such a human-centric approach to technology simply a theoretical possibility. Tens of millions of people every day use video conferencing to deliver personal services, such as language and skill instruction, online. Online virtual collaboration spaces like GitHub are central to value creation in our era. Virtual and augmented reality hold out the prospect of dramatically increasing what is possible, allowing more types of collaborative work to be performed at great distances. Productivity software from Slack to Wikipedia to LinkedIn to the Microsoft product suites makes previously unimaginable real-time collaboration omnipresent.
Indeed, recent research has shown that without the human-created Wikipedia, the value of search engines would plummet (since that is where the top results of substantive searches are often found), even though search services are touted as frontline examples of the value of AI. (And yet Wikipedia is a threadbare nonprofit, while search engines are some of the most highly valued assets in our civilization.) Collaboration technologies are helping us work from home through the Covid-19 pandemic; they have become a matter of survival, and the future promises ways in which long-distance collaboration may become ever more vivid and satisfying.
To be clear, we are great enthusiasts for the methods most often discussed as illustrations of the potential of AI: deep/convolutional networks and so on. These techniques, however, rely heavily on human data. For example, OpenAI’s much-celebrated text-generation algorithm was trained on millions of websites produced by humans. And evidence from the field of machine teaching increasingly suggests that when the humans generating the data are actively engaged in providing high-quality, carefully chosen input, systems can be trained at far lower cost. But active engagement is possible only if, unlike in the usual AI attitude, all contributors, not just elite engineers, are considered crucial role players and are financially compensated.
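As a toy illustration of the machine-teaching point, here is a hypothetical sketch in Python, assuming scikit-learn and NumPy. The essay cites no specific method; this uncertainty-sampling loop, in which a “teacher” labels only the examples the current model finds most ambiguous rather than labeling an entire pool, is a stand-in, not the authors’ technique.

```python
# Toy "carefully chosen input" loop: instead of labeling all 500 examples,
# a human teacher answers only the ~20 queries the model is least sure about.
# Hypothetical sketch, not the method the essay's authors describe.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=500, n_features=5, random_state=0)

# Seed with one human-labeled example from each class.
labeled = [int(np.where(y == 0)[0][0]), int(np.where(y == 1)[0][0])]
pool = [i for i in range(len(X)) if i not in labeled]

model = LogisticRegression()
for _ in range(20):
    model.fit(X[labeled], y[labeled])
    # Query the pool example whose predicted probability is closest to 0.5,
    # i.e., the one the current model finds most ambiguous.
    probs = model.predict_proba(X[pool])[:, 1]
    pick = pool[int(np.argmin(np.abs(probs - 0.5)))]
    labeled.append(pick)  # the human teacher supplies y[pick]
    pool.remove(pick)

model.fit(X[labeled], y[labeled])
print(f"accuracy after only {len(labeled)} labels:", model.score(X, y))
```

In runs like this, a couple dozen well-chosen labels typically recover most of the accuracy that hundreds of randomly chosen labels would buy, which is the economic point: the value lies in the human choices.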
A powerful gut response from some AI enthusiasts, after reading this far, might be that we must be wrong, because AI is starting to train itself, without people. But AI without human data is possible only for a narrow class of problems, the kind that can be defined precisely rather than statistically or through ongoing measurement of reality. Board games like chess and certain scientific and math problems are the usual examples, though even in these cases human teams using so-called AI resources usually outperform AI by itself. While self-trainable examples can be important, they are rare and not representative of real-world problems.
“AI” is best understood as a political and social ideology rather than as a basket of algorithms. The core of the ideology is that a suite of technologies, designed by a small technical elite, can and should become autonomous from, and eventually replace rather than complement, not just individual humans but much of humanity. Given that any such replacement is a mirage, this ideology has strong resonances with other historical ideologies, such as technocracy and central-planning-based forms of socialism, which viewed the replacement of most human judgment and agency with systems created by a small technical elite as desirable or inevitable. It is thus not all that surprising that the Chinese Communist Party would find AI to be a welcome technological formulation of its own ideology.
It’s surprising that leaders of Western tech companies and governments have been so quick to accept this ideology. One reason might be a loss of faith in the institutions of liberal democratic capitalism during the last decade. (“Liberal” here has the broad meaning of a society committed to universal freedom and human dignity, not the narrower contemporary political one.) Political-economic institutions have not just performed poorly in the last few decades; they’ve directly fueled the rise of hyper-concentrated wealth and political power in a way that happens to align with the elevation of AI to dominate our visions of the future. The richest companies, individuals, and regions now tend to be the ones closest to the biggest data-gathering computers. Pluralistic visions of liberal democratic market societies will lose out to AI-driven ones unless we reimagine the role of technology in human affairs.
Not only is this reimagination possible, it’s been increasingly demonstrated on a large scale in one of the places most under pressure from the AI-fueled CCP ideology, just across the Taiwan Strait. Under the leadership of Audrey Tang and her Sunflower and g0v movements, almost half of Taiwan’s population has joined a national participatory data-governance and -sharing platform that allows citizens to self-organize the use of data, demand services in exchange for these data, deliberate thoughtfully on collective choices, and vote in innovative ways on civic questions. Driven neither by pseudo-capitalism based on barter nor by state planning, Taiwan’s citizens have built a culture of agency over their technologies through civic participation and collective organization, something we are starting to see emerge in Europe and the US through movements like data cooperatives. Most impressively, tools growing out of this approach have been critical to Taiwan’s best-in-the-world success at containing the Covid-19 pandemic, with only 49 cases to date in a population of more than 20 million at China’s doorstep.
The active engagement of a wide range of citizens in creating technologies and data systems, through a variety of collective organizations, offers an attractive alternative worldview. In the case of Taiwan, this direction is not only consistent with but organically growing out of Chinese culture. If pluralistic societies want to win a race not against China as a nation but against authoritarianism wherever it arises, they cannot make it a race for the development of AI; that gives up the game before it begins. They must win on their own terms, terms that are more productive and dynamic in the long run than top-down technocracy, as was demonstrated during the Cold War.
As authoritarian governments try to compete against pluralistic technologies in the 21st century, they will inevitably face pressure to empower their own citizens to participate in creating technical systems, eroding their grip on power. An AI-driven cold war, on the other hand, can only push both sides toward increasing centralization of power in a dysfunctional techno-authoritarian elite that stealthily stifles innovation. To paraphrase Edmund Burke, all that is necessary for the triumph of an AI-driven, automation-based dystopia is that liberal democracy accept it as inevitable.
