The demand for holistic understanding. JL
Janine Liberty reports in MIT Media Lab:
We interact numerous times each day with thinking machines that make life easier, “thinking” on their own, acquiring knowledge, building on it, and communicating with other thinking machines to make complex judgments and decisions in ways that not even the programmers who wrote their code can explain. The aim is to unite scholars studying machine behavior to recognize complementarities. “The rise of machines making decisions and acti(ng) autonomously calls for a new field of scientific study that looks at them not as products of engineering and computer science but as a new class of actors with their own behavioral patterns and ecology.”
As our interaction with “thinking” technology rapidly increases, a group led by researchers at the MIT Media Lab is calling for a new field of research—machine behavior—which would take the study of artificial intelligence well beyond computer science and engineering into biology, economics, psychology, and other behavioral and social sciences.
“We need more open, trustworthy, reliable investigation into the impact intelligent machines are having on society, and so research needs to incorporate expertise and knowledge from beyond the fields that have traditionally studied it,” said Iyad Rahwan, who leads the Scalable Cooperation group at the Media Lab.
Rahwan, Manuel Cebrian and Nick Obradovich, along with other scientists from the Media Lab convened colleagues at the Max Planck Institutes, Stanford University, the University of California San Diego, and other educational institutions as well as from Google, Facebook, and Microsoft, to publish a paper in Nature making a case for a wide-ranging scientific research agenda aimed at understanding the behavior of artificial intelligence systems.
“We’re seeing the rise of machines with agency, machines that are actors making decisions and taking actions autonomously,” Rahwan said. “This calls for a new field of scientific study that looks at them not solely as products of engineering and computer science but additionally as a new class of actors with their own behavioral patterns and ecology.”
It’s not that economists and political scientists aren’t studying the role of AI in their fields. Labor economists, for example, are looking at how AI will change the job market, while political scientists are delving into the influence of social media on the political process. But this research is taking place largely in silos.
Concurrent with publication of the paper, the Scalable Cooperation group has released a series of interviews with many of its authors. It is also organizing conferences to bring together those working on machine behavior across diverse fields.
“The Media Lab has long applied a wide range of expertise and knowledge to its research and study of thinking machines,” said Joi Ito, the Media Lab’s director. “I’m excited that so many others have endorsed this approach, and by the momentum now building behind it.”
Algorithms, trust, and secrecy
We interact numerous times each day with thinking machines. We may ask Siri to find the dry cleaner nearest to our home, tell Alexa to order dish soap, or get a medical diagnosis generated by an algorithm. Many such tools that make life easier are in fact “thinking” on their own, acquiring knowledge, building on it, and even communicating with other thinking machines to make ever more complex judgments and decisions—and in ways that not even the programmers who wrote their code can fully explain.
Imagine, for instance, that a news feed run by a deep neural net recommends an article to you from a gardening magazine, even though you’re not a gardener. “If I asked the engineer who designed the algorithm, that engineer would not be able to state in a comprehensive and causal way why that algorithm decided to recommend that article to you,” said Nick Obradovich, a research scientist in the Scalable Cooperation group and one of the lead authors of the Nature paper.
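To make that opacity concrete, here is a toy sketch in Python. Nothing in it comes from the paper: the miniature network, its random weights, and the feature vector are all invented for illustration. The point is only that even in the smallest feed-ranking net, the sole mechanical "explanation" for a recommendation is a cascade of learned numbers.

```python
import numpy as np

rng = np.random.default_rng(0)

# A toy two-layer "recommender": article features go in, a relevance
# score comes out. The random weights stand in for whatever training
# would have produced.
W1, b1 = rng.normal(size=(8, 4)), rng.normal(size=4)
W2, b2 = rng.normal(size=4), rng.normal()

def score(article_features: np.ndarray) -> float:
    hidden = np.maximum(article_features @ W1 + b1, 0.0)  # ReLU layer
    return float(hidden @ W2 + b2)

gardening_article = rng.normal(size=8)  # stand-in feature vector
print("relevance score:", round(score(gardening_article), 3))

# The only mechanical "explanation" available is this cascade of
# multiplications and thresholds; nothing in the weights maps back to
# a human-readable, causal reason like "user likes gardening".
print("output-layer weights:", W2)
```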
Parents often think of their children’s interaction with the family personal assistant as charming or funny. But what happens when the assistant, rich with cutting-edge AI, responds to a child’s fourth or fifth question about T. Rex by suggesting, “Wouldn’t it be nice if you had this dinosaur as a toy?”
“What’s driving that recommendation?” Rahwan said. “Is the device trying to do something to enrich the child’s experience—or to enrich the company selling the toy dinosaur? It’s very hard to answer that question.”
So far, no one has found a surefire way to examine all of the important potential consequences of algorithmic decisions on our lives. That means we have no way of assessing whether the choices AI is making for us are any better than decisions made by humans, when we step outside of the specific and narrow objectives for which engineers are optimizing.
Researchers wishing to investigate decision-making by machines are often thwarted by industrial secrecy and legal and intellectual property protections. The source code and model structures for the most ubiquitous algorithms deployed in society are proprietary, as is the data used to “train” them, so all that can be examined is their inputs and outputs. “Let’s say we want to study how Amazon does its pricing, which might require the creation of fake personas who browse the site for purchases,” said Rahwan. “In doing that, you may be breaking the terms of service and that could be a crime.”
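As a rough illustration of what studying "inputs and outputs" can look like, here is a minimal black-box audit sketch in Python. The `get_quoted_price` function is a hypothetical stand-in for a proprietary pricing system, with behavior invented purely so the example runs; a real audit would query the live service, with the terms-of-service risks Rahwan describes.

```python
import statistics

# Hypothetical stand-in for a proprietary pricing system that can only
# be observed from the outside: we control the inputs (personas) and
# record the outputs (quoted prices). The internals stay invisible.
def get_quoted_price(persona: dict) -> float:
    base = 19.99
    # Invented behavior purely so the sketch runs; an actual audit
    # would call the live system instead of this toy function.
    if persona["device"] == "ios":
        base *= 1.10
    if persona["visits"] > 5:
        base *= 0.95
    return round(base, 2)

# Synthetic personas that vary one attribute at a time, so any price
# difference can be attributed to that attribute.
personas = [
    {"device": "ios", "visits": 1},
    {"device": "android", "visits": 1},
    {"device": "ios", "visits": 10},
    {"device": "android", "visits": 10},
]

quotes = {tuple(p.values()): get_quoted_price(p) for p in personas}
for persona, price in quotes.items():
    print(persona, "->", price)

# Aggregate view: does the device attribute alone shift prices?
ios = [v for k, v in quotes.items() if k[0] == "ios"]
android = [v for k, v in quotes.items() if k[0] == "android"]
print("mean iOS quote:", statistics.mean(ios))
print("mean Android quote:", statistics.mean(android))
```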
A co-author of the paper, Alan Mislove of Northeastern University, is among a group of academic and media plaintiffs in a lawsuit challenging the constitutionality of a provision of the Computer Fraud and Abuse Act that makes it criminal to conduct research with the goal of determining whether algorithms produce illegal discrimination in areas like housing and employment.
But even if big tech companies decided to share information about their algorithms and otherwise allow researchers more access, an even bigger barrier remains: AI agents can acquire novel behaviors as they interact with the world around them and with other agents. The behaviors learned from such interactions are virtually impossible to predict, and even when solutions can be described mathematically, they can be “so lengthy and complex as to be indecipherable,” according to the paper.
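One way to see how interaction alone can produce behavior nobody wrote down is a toy multi-agent experiment. The sketch below, a simplified Q-learning setup in an iterated prisoner's dilemma, is not from the paper; it only illustrates the idea that the agents' joint behavior emerges from repeated play rather than from any explicit line of their code.

```python
import random

# Two independent learning agents play an iterated prisoner's dilemma.
# Neither agent's code names a strategy like "tit for tat"; whatever
# joint behavior appears emerges from the interaction itself.
ACTIONS = ["C", "D"]  # cooperate, defect
PAYOFF = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
          ("D", "C"): (5, 0), ("D", "D"): (1, 1)}

def make_agent():
    # State = opponent's previous move; all values start at zero.
    return {(prev, a): 0.0 for prev in ["C", "D", None] for a in ACTIONS}

def choose(q, prev, eps=0.1):
    # Epsilon-greedy: mostly exploit learned values, sometimes explore.
    if random.random() < eps:
        return random.choice(ACTIONS)
    return max(ACTIONS, key=lambda a: q[(prev, a)])

q1, q2 = make_agent(), make_agent()
prev1 = prev2 = None  # what each agent last saw the other do
alpha = 0.1           # learning rate

for _ in range(50_000):
    a1, a2 = choose(q1, prev2), choose(q2, prev1)
    r1, r2 = PAYOFF[(a1, a2)]
    # Simplified one-step value update toward the observed reward.
    q1[(prev2, a1)] += alpha * (r1 - q1[(prev2, a1)])
    q2[(prev1, a2)] += alpha * (r2 - q2[(prev1, a2)])
    prev1, prev2 = a1, a2

print("agent 1 learned values:", {k: round(v, 2) for k, v in q1.items()})
```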
Equally hard to untangle are some of the questions being raised about how and why AI makes the decisions it does, many of which carry knotty ethical implications.
Say, for instance, a hypothetical self-driving car is sold as being the safest on the market. One of the factors that makes it safer is that it “knows” when a big truck pulls up along its left side and automatically moves itself three inches to the right while still remaining in its own lane. But what if a cyclist or motorcycle happens to be pulling up on the right at the same time and is thus killed because of this safety feature?
“If you were able to look at the statistics and look at the behavior of the car in the aggregate, it might be killing three times the number of cyclists over a million rides than another model,” Rahwan said. “As a computer scientist, how are you going to program the choice between the safety of the occupants of the car and the safety of those outside the car? You can’t just engineer the car to be ‘safe’—safe for whom?”
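As a back-of-the-envelope illustration of what examining a car's behavior "in the aggregate" might involve, the sketch below normalizes incident counts to a per-million-rides rate so fleets of different sizes can be compared. Every number is invented for the example; none comes from the article or any real vehicle.

```python
# Purely illustrative ride logs for two hypothetical car models:
# total rides driven and cyclist incidents recorded.
fleet_stats = {
    "model_a": {"rides": 4_000_000, "cyclist_incidents": 36},
    "model_b": {"rides": 2_500_000, "cyclist_incidents": 8},
}

# Normalize to incidents per million rides so the two fleets are
# comparable despite different total mileage.
for model, s in fleet_stats.items():
    rate = s["cyclist_incidents"] / (s["rides"] / 1_000_000)
    print(f"{model}: {rate:.1f} cyclist incidents per million rides")
```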
Growing a field
The aim of the paper is not to create a new field from scratch, but rather, to unite scholars studying machine behavior under a single banner to recognize common goals and complementarities. This sort of thing happens all the time in science. A 1963 paper by Dutch biologist and Nobel Prize winner Nikolaas Tinbergen, for instance, raised questions and probed issues that led to the establishment of the field of animal behavior.
Rahwan and the coauthors hope that by naming and surveying the nascent field of machine behavior, they can provide a framework for other researchers, from all fields of inquiry, to build upon. Gathering varied, interdisciplinary perspectives is critical to understanding how to best study, and ultimately live with, these novel intelligent technologies.