Horacio Rozanski comments in Knowledge@Wharton:
Technical prowess is only one factor in determining who will win the AI race. Before the technology can reach its potential, particularly in economic and national security applications, companies and policymakers will have to resolve the basic but difficult challenge of what we, as a society, should trust AI to do. Adoption of AI will advance only by building trust that it is safe, effective, ethical, accountable and transparent. To build that trust, we must be able to verify that AI works, that it shares our values and that we understand how an algorithm arrives at its answers.
The exponential pace of technological advancement has made it more challenging than ever before to address its unintended consequences. We have seen it with digital innovation, especially social media, and we are only just beginning to contemplate it with artificial intelligence, which holds the promise of being the most transformational technology of the information era.
The unanticipated consequences of digital innovation have been well documented. Live-streaming of unspeakable violence and other crimes. Proliferation of conspiracy theories. Addiction that alters kids’ brains and emotional health. Groupthink that worsens political and societal divisions. The promotion of cultural tribalism, intolerance and social divisiveness. Foreign manipulation of social media that undermines electoral integrity. All of these developments have degraded our trust in digital technologies.
While it is vital that technology companies and policymakers address these issues through a combination of self-regulation and government mandates, there is an additional, related step that we, as a country, should take: Now is the time to anticipate the broad implications of the next frontier of innovation, artificial intelligence. In particular, it is urgent that we address now what AI can be trusted to do and how it might be misused.
The United States, China, and other global powerhouses are in a race to develop and apply AI in ways that advance economic and national security interests. Although the United States appears to have the edge for now, we are not significantly ahead. Investment is pouring into the hardware, software and talent required to compete in this space over the medium to long term. According to Tractica, a market research firm that focuses on technology, global investment in artificial intelligence is expected to grow to more than $300 billion by 2025, powered by early successes and unbridled optimism.
But technical prowess is only one factor in determining who will win the AI race. Before the technology can reach its potential — particularly in economic and national security applications in the United States — companies and policymakers will have to resolve the basic but difficult challenge of what we, as a society, should trust AI to do. This is central to the formulation of a national strategy for artificial intelligence. Above all, the strategy must build trust in this transformative technology. There are three essential paths to success.
First, we must trust that AI works — that the mathematical algorithms at the foundation of all AI today will do their jobs right. This requires massive amounts of labeled data and time to train the algorithms. It also suggests that, at least in the early stages of adoption, the user of an algorithm should be involved in building it. For example, an intelligence analyst who is accustomed to seeing all the data and then making an assessment may need, initially at least, to become invested in the process of teaching the algorithm to see what she sees — reinforcing correct answers and fixing mistakes along the way. When the user sees that artificial intelligence actually did improve the analysis, she will trust that it is doing its job right.
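To make that teaching loop concrete, here is a minimal sketch in Python, using invented names and toy data, of an analyst-in-the-loop workflow: the model proposes an answer, the analyst confirms or corrects it, and every vetted label is retained for the next retraining cycle. It illustrates the general pattern described above, not any particular agency system.

```python
from dataclasses import dataclass, field

@dataclass
class FeedbackLoop:
    # Vetted (item, label) pairs that feed the next retraining run.
    training_data: list = field(default_factory=list)

    def review(self, item, model_prediction, analyst_label):
        """The analyst confirms or corrects the model's answer; the vetted
        label is kept either way so the next training cycle learns from it."""
        self.training_data.append((item, analyst_label))
        return analyst_label == model_prediction  # True if the model was right

loop = FeedbackLoop()
print(loop.review("image_041", "new vehicle present", "new vehicle present"))  # True
print(loop.review("image_042", "no change", "new structure present"))          # False
print("vetted labels captured:", len(loop.training_data))                      # 2
```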
At the National Geospatial-Intelligence Agency (NGA), a major effort is underway to use algorithms for change detection in imagery, often one of the first steps in an analysis process designed to provide strategic warning to policymakers. According to Sue Kalweit, Director of the Analysis Directorate at NGA, pilot programs have proven the effectiveness of algorithms in this use case. NGA is working to scale the capability for enterprise-wide use, and as computer vision technology improves, it is expected to support intelligence analysis beyond change detection, extending to object detection, identification, classification and, ultimately, contextualization.
One of the questions Kalweit says she gets most frequently is about the willingness of NGA analysts to accept that AI can do work that they have been doing for decades. “I can tell you emphatically that any technology that’s going to help cue them to where to look or provide an indicator that something of interest that they care about is in this image, they want that technology,” she says. Analysts are embracing it because the use of AI in pilot programs has freed them to do higher-level cognitive work sooner. They are charged with providing insight, not just information, and this technology allows them to focus on the meaning of the change that the algorithm has detected, rather than spending hours scanning through reams of images to spot a change.
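As a rough illustration of what the change detection described above means at its simplest, the sketch below flags pixels whose brightness differs between two already-aligned images of the same scene. It is a toy baseline on made-up data, not a description of NGA's actual computer vision pipeline, which the article does not detail.

```python
import numpy as np

def detect_change(before: np.ndarray, after: np.ndarray,
                  threshold: float = 30.0) -> np.ndarray:
    """Return a boolean mask of pixels whose brightness changed by more than
    `threshold`, given two grayscale images already aligned to the same grid."""
    if before.shape != after.shape:
        raise ValueError("images must be co-registered to the same shape")
    diff = np.abs(after.astype(np.float32) - before.astype(np.float32))
    return diff > threshold

# Toy example: a flat "before" scene and an "after" scene with one new bright
# object, standing in for something like new construction at a site.
before = np.full((100, 100), 80, dtype=np.uint8)
after = before.copy()
after[10:20, 10:20] = 200

mask = detect_change(before, after)
ys, xs = np.nonzero(mask)
print("changed pixels:", int(mask.sum()))                              # 100
print("change bounding box:", ys.min(), ys.max(), xs.min(), xs.max())  # 10 19 10 19
```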
A second essential path to building trust in AI is ensuring that it shares our values — that it will reliably behave as we would under the same circumstances. The public and private sectors in the United States are beginning to get serious about defining an ethical framework for AI, including how to best protect, organize, and use the data it is scrutinizing. For example, researchers are working on solutions to control bias in algorithms.
At the same time, policymakers are signaling that they may step in. Democratic Senators Cory Booker and Ron Wyden recently introduced legislation that would require companies to audit AI systems for bias and correct anything that may be producing skewed results. It would direct the Federal Trade Commission to write rules for “highly sensitive” systems and require assessments of those systems plus security and privacy protections for the data fed into them. Clearly, much more work needs to be done in such areas if we, as a society, are going to trust that AI shares our values.
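The bill leaves the specific tests to regulators, but one common, illustrative measure such an audit might include is comparing a model's positive-outcome rate across groups defined by a protected attribute, sometimes called demographic parity. The sketch below, on made-up data, shows how that check could be computed.

```python
import numpy as np

def selection_rates(predictions: np.ndarray, groups: np.ndarray) -> dict:
    """Positive-prediction rate per group, e.g. the share approved for a loan."""
    return {str(g): float(predictions[groups == g].mean()) for g in np.unique(groups)}

# Toy data: binary decisions from some model, plus a group label per person.
preds = np.array([1, 0, 1, 1, 0, 1, 0, 0, 1, 0])
group = np.array(["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"])

rates = selection_rates(preds, group)
gap = max(rates.values()) - min(rates.values())
print("selection rates:", rates)      # {'A': 0.6, 'B': 0.4}
print("parity gap:", round(gap, 2))   # 0.2 -> a gap an auditor might flag
```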
Finally, as President Reagan taught us, to truly trust, we must verify. This means we must understand how an algorithm arrives at its answers. Sadly, researchers in academia and industry already have begun to identify ways in which AI can be tricked into “seeing” the wrong things. For example, an AI can be corrupted during training so that a specifically placed sticker that would not fool a human could make an autonomous vehicle see a speed limit sign instead of a stop sign, with potentially catastrophic results. These new “attack surfaces” (as avenues of risk are described in cybersecurity) must be understood and mitigated before the technology is operational. Greater transparency in the rationale for AI decisions will be crucial to understanding whether a mistake, especially a tragic one, is due to a bad algorithm, a bad actor or unavoidable bad luck. Work already underway in making AI explainable must be accelerated if the technology is to become a ubiquitous tool of everyday life and of national security. In short, we must be able to trust and verify.
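A small worked example shows how little it can take to flip a model's answer. The sketch below applies an FGSM-style perturbation to a toy linear classifier in Python; the stop-sign scenario above involves poisoning real vision systems during training, so this is only an illustration of the underlying sensitivity, with every number invented.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy linear classifier over a flattened 8x8 "image": score > 0 means the
# model reads "stop sign", score <= 0 means "speed limit sign".
w = rng.normal(size=64)

def score(x: np.ndarray) -> float:
    return float(w @ x)

# Build a clean input the model classifies as a stop sign with score 1.0.
x_clean = rng.normal(size=64)
x_clean += w * (1.0 - score(x_clean)) / (w @ w)

# FGSM-style perturbation: nudge every pixel by at most epsilon in the
# direction that lowers the score (for a linear model that direction is
# simply -sign(w)). Each pixel barely moves, but the score drops by
# epsilon * sum(|w|), which here is enough to flip the prediction.
epsilon = 0.05
x_adv = x_clean - epsilon * np.sign(w)

print("clean score:", round(score(x_clean), 3))       # 1.0 -> "stop sign"
print("adversarial score:", round(score(x_adv), 3))   # well below 0 -> misread
print("max pixel change:", float(np.abs(x_adv - x_clean).max()))  # about epsilon
```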
There is a window of opportunity today to make progress in these areas. The U.S. government is moving more quickly to adopt artificial intelligence than many would have expected only two years ago. At the same time, policymakers are coalescing around the need for a national AI strategy, even as they reassure us that on matters of life and death humans will always be in the loop.
Given how quickly the technology is advancing, however, it is not hard to foresee the day — in the not-too-distant future — when a cyber intrusion or attack on critical systems happens so fast that the response must come at computer speed. There may be no time for human decision-making. At that moment, trust in the technology will be imperative. It must be built in at the design phase. Now is the time for experts in the business, technology, scientific, legal, and policymaking communities to agree on a national strategy for AI that will advance its adoption by building in trust that it is safe, effective, ethical, accountable and transparent.