Kyle Wiggers reports in VentureBeat:
We owe a lot to the Defense Advanced Research Projects Agency (DARPA), a division of the U.S. Department of Defense responsible for the development of emerging technologies. The 60-year-old agency proposed and prototyped the precursor to the internet. It developed an interactive mapping solution akin to Google Maps. And its personal assistant — Personal Assistant That Learns (PAL) — predated Apple’s Siri and Amazon’s Alexa by years.
But it’s also one of the birthplaces of machine learning, the branch of artificial intelligence (AI) best known for neural networks that loosely mimic the behavior of neurons in the brain. Dr. Brian Pierce, director of DARPA’s Innovation Office, spoke about the agency’s recent efforts at VB Summit 2018.
One area of study is so-called “common sense” AI — AI that can draw on environmental cues and an understanding of the world to reason like a human. Concretely, DARPA’s Machine Common Sense Program seeks to design computational models that mimic core domains of cognition: objects (intuitive physics), places (spatial navigation), and agents (intentional actors).
“You could develop a classifier that could identify a number of objects in an image, but if you ask a question, you’re not going to get an answer,” Pierce said. “We’d like to get away from having an enormous amount of data to train neural networks [and] get away with using fewer labels [to] train models.”
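Pierce’s “fewer labels” goal echoes familiar semi-supervised learning ideas. As a rough, hedged illustration (a generic technique, not anything DARPA has described using), the sketch below trains a digit classifier with only a small fraction of labels, letting scikit-learn’s SelfTrainingClassifier pseudo-label the rest; the dataset and thresholds are arbitrary choices for the example.

```python
# Illustrative only: semi-supervised "self-training" with very few labels,
# a common way to reduce labeling needs -- not DARPA's approach.
import numpy as np
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.semi_supervised import SelfTrainingClassifier
from sklearn.svm import SVC

X, y = load_digits(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Hide 90% of the training labels; scikit-learn marks unlabeled points with -1.
rng = np.random.RandomState(0)
y_partial = y_train.copy()
y_partial[rng.rand(len(y_partial)) < 0.9] = -1

# The base classifier pseudo-labels confident unlabeled points and retrains on them.
model = SelfTrainingClassifier(SVC(probability=True, gamma="scale"), threshold=0.9)
model.fit(X_train, y_partial)

print("labeled examples used:", int((y_partial != -1).sum()), "of", len(y_partial))
print("test accuracy:", round(model.score(X_test, y_test), 3))
```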
The agency is also pursuing explainable AI (XAI), a field that aims to develop next-generation machine learning techniques that explain a given system’s rationale.
“[It] helps you to understand the bounds of the system, which can better inform the human user,” Pierce said.
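To make “explaining a system’s rationale” concrete, here is a small, hedged sketch using a generic post-hoc method — permutation importance — rather than anything developed under the XAI program itself; the dataset and model are arbitrary stand-ins. It ranks input features by how much held-out accuracy drops when each one is shuffled, which is one simple way to surface what a model actually relies on.

```python
# Illustrative only: a generic post-hoc explanation (permutation importance),
# not a technique from DARPA's XAI program.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

data = load_breast_cancer()
X_train, X_test, y_train, y_test = train_test_split(
    data.data, data.target, random_state=0
)

model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)

# Shuffle each feature in turn and measure the drop in held-out accuracy:
# the features whose shuffling hurts most are the ones the model leans on.
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)
ranked = result.importances_mean.argsort()[::-1]
for i in ranked[:5]:
    print(f"{data.feature_names[i]}: importance {result.importances_mean[i]:.3f}")
```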
More broadly, the agency recently pledged to invest $2 billion in AI over the next five years — or about $400 million a year — as part of its AI Next initiative.
Anyone can participate in DARPA-funded programs by responding to an invitation for proposals on fbo.gov. Its $3 billion annual budget funds projects overseen by roughly 100 program managers (PMs), who retain in-house and contracted talent.
DARPA has spurred advancements in AI in part through competition. In 2004, DARPA kicked off the Grand Challenge, which tasked teams of engineers and researchers with completing a 132-mile course through the Mojave Desert with an autonomous car. The subsequent Urban Challenge had those cars navigate a course designed to replicate an urban environment.
Other challenges have included the DARPA Robotics Challenge, which pitted teams’ control and perception algorithms (and operator interfaces) against one another, and the Cyber Grand Challenge, a competition to create automated cyber-defense systems.
The end game, Pierce said, is to achieve a paradigm shift from AI that can perform basic inference to AI capable of contextual reasoning — systems that, in essence, can come to accurate conclusions in situations they’ve never encountered.
“We feel that if we can make the interactions between humans and machines more symmetric, we can have machines become more effective partners in whatever endeavor we may tackle,” Pierce told VentureBeat earlier this month. “It’s the foundation that starts to enable other types of applications.”