April Glaser reports in Wired:
AI is already creating policy challenges. Human fallibility at the level of input and design makes scholars and policy experts anxious: for machines to learn, they must be fed massive sets of data, and it's humans, with all their inherent faults, who are doing the feeding. There must be accountability for the data fed into these systems to ensure it is accurate, and it makes sense to lay the foundation while humans are still at the wheel.
Researchers disagree on when artificial intelligence that displays something like human understanding might arrive. But the Obama administration isn’t waiting to find out. The White House says the government needs to start thinking about how to regulate and use the powerful technology while it is still dependent on humans.
“The public should have an accurate mental model of what we mean when we say artificial intelligence,” says Ryan Calo, who teaches law at the University of Washington. Calo spoke last week at the first of four workshops the White House is hosting this summer to examine how to address an increasingly AI-powered world.
Although scholars and policymakers agree that Washington has a role to play here, it isn't clear what the path to that policy looks like, even as pressing questions accumulate. They include deciding when and how Google's self-driving cars take to American highways and examining how bias permeates algorithms.
“One thing we know for sure is that AI is making policy challenges already, such as how to make sure the technology remains safe, controllable, and predictable, even as it gets much more complex and smarter,” said Ed Felten, the deputy US chief technology officer, who is leading the White House's summer of AI research. “Some of these issues will become more challenging over time as the technology progresses, so we'll need to keep upping our game.”
AI, Still Puppeteered By People
Although artificial intelligence already exceeds human capabilities in some areas—Google's AlphaGo repeatedly beat the world's best Go player—each system's applications remain narrow and reliant upon humans. “Intelligence and autonomy are two very different things,” says Oren Etzioni, the director of the nonprofit Allen Institute for Artificial Intelligence and a speaker at Tuesday's workshop. “In people, intelligence and autonomy go hand in hand, but in computers that's not at all the case,” he said.
Entire teams of people who have spent years studying the technology painstakingly build and manage the smartest AI systems. As Etzioni notes, AlphaGo can't play its next round until someone pushes a button. But it's human fallibility at the level of input and design that makes scholars and policy experts anxious. For machines to learn, they must be fed massive sets of data. And it's humans, with all their inherent faults, who are doing the feeding.
Feeding the Machines
A recent White House report outlined the discriminatory potential of big data. To make sense of data, someone must categorize and profile it. Technologists and designers could be feeding existing prejudices and structural inequities into how the AI thinks.
This is not an academic issue. Google's ad-delivery algorithm sent more ads for higher-paying jobs to men than to women. And ProPublica recently reported that judges who made sentencing and parole decisions relied upon AI systems shown to be racially biased in making risk assessments.
“The journalists found that there was this real disparity between African Americans who were being labeled as potential recidivists versus white people,” said Microsoft researcher Kate Crawford. “This was a system that was producing bias in its very design, but we can’t see how it works. The system is proprietary. They haven’t shared the data. We don’t know why the system was getting these results.”
If AI will determine things like who gets a mortgage, a job, or parole, Crawford says, it will be increasingly important to apply some level of accountability to the data fed into these systems to ensure it is accurate.
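To make the accountability point concrete, here is a minimal sketch, in Python, of the kind of disparity audit journalists and researchers run on risk-scoring tools: comparing how often people who did not reoffend were nonetheless flagged as high risk, group by group. All records, group names, and the threshold below are hypothetical and chosen only for illustration; this is not ProPublica's data or methodology, nor any real system's scores.

# Illustrative sketch only: hypothetical audit of a risk-scoring tool.
# Every value below is invented for demonstration purposes.
from collections import defaultdict

# Each record: (group, risk_score, reoffended) -- all made-up data.
records = [
    ("group_a", 8, False), ("group_a", 7, False), ("group_a", 9, True),
    ("group_a", 6, True),  ("group_b", 3, False), ("group_b", 4, True),
    ("group_b", 2, False), ("group_b", 8, True),  ("group_a", 5, False),
    ("group_b", 5, False),
]
THRESHOLD = 6  # scores at or above this are labeled "high risk"

def false_positive_rate(rows):
    """Share of people who did NOT reoffend but were labeled high risk."""
    non_reoffenders = [r for r in rows if not r[2]]
    if not non_reoffenders:
        return 0.0
    flagged = [r for r in non_reoffenders if r[1] >= THRESHOLD]
    return len(flagged) / len(non_reoffenders)

by_group = defaultdict(list)
for rec in records:
    by_group[rec[0]].append(rec)

for group, rows in sorted(by_group.items()):
    print(f"{group}: false positive rate = {false_positive_rate(rows):.2f}")

The catch, and the reason Crawford's point matters, is that an audit like this requires access to the scores, the outcomes, and the underlying data. When a system is proprietary and its data unshared, no one outside can run even this simple check.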
How The Government Can Step In
Artificial intelligence is used for more than life choices and judicial outcomes. It’s also used to make immediate decisions about how, say, an autonomous car avoids a collision. The problem with trying to regulate these technologies is that they’re still being developed, says Bryant Walker Smith, a law professor at the University of South Carolina and one of the nation’s leading experts on self-driving cars.
Any kind of design requirements this early on could inhibit building a safer, more responsible machine, Smith says. That puts the onus on creators of autonomous vehicles to make the public safety case themselves.
Meanwhile, the government is already wrestling with how to regulate and oversee other forms of AI already in use, from drones to cancer-detection analytics. The White House's Office of Science and Technology Policy is bringing several agencies together to craft an approach based on evidence, not anxiety. The issues the government will have to consider range from what it will be able to buy, and under what terms, to how it funds research into making AI safer.
Still, even as the government plays catch-up with technology already at work in the world, it’s worth remembering that AI remains nascent. To regulate AI in the future, it makes sense to lay the foundation while humans are still at the wheel.