John McQuaid reports in Scientific American:
In Liverpool, England, at a February 2020 conference on the rather unglamorous topic of government purchasing, attendees circulated through exhibitor and vendor displays, lingering at some, bypassing others. They were being closely watched. Around the floor, 24 discreetly positioned cameras tracked each person’s movements and cataloged subtle contractions in individuals’ facial muscles at five to 10 frames per second as they reacted to different displays. The images were fed to a computer network, where artificial-intelligence algorithms assessed each person’s gender and age group and analyzed their expressions for signs of “happiness” and “engagement.”
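To make that setup more concrete, here is a minimal sketch, in Python, of the kind of frame-sampling pipeline the Liverpool installation describes: grab frames a few times per second, detect faces with OpenCV's bundled Haar cascade, and hand each face crop to an expression model. This is not Zenus's system; `score_expression` is a hypothetical placeholder for whatever proprietary classifier a vendor would plug in.

```python
# Minimal sketch of a frame-sampling pipeline like the one described above.
# NOT Zenus's actual system; score_expression() is a hypothetical placeholder.
import time

import cv2


def score_expression(face_crop):
    """Hypothetical stand-in for a trained expression classifier."""
    return {"happiness": 0.0, "engagement": 0.0}


# OpenCV ships a pretrained frontal-face Haar cascade usable for detection.
detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
)
camera = cv2.VideoCapture(0)  # one camera; the Liverpool floor used 24

while camera.isOpened():
    ok, frame = camera.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    for (x, y, w, h) in faces:
        scores = score_expression(frame[y:y + h, x:x + w])
        print(f"face at ({x}, {y}): {scores}")
    time.sleep(0.2)  # roughly five frames per second, as in the article

camera.release()
```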
About a year after the Liverpool event, Panos Moutafis, CEO of Austin, Tex.–based Zenus, the company behind the technology, was still excited about the results. “I haven’t seen lots of commercial systems getting this level of accuracy,” he said to me during a video call, showing me a photograph of the crowd, the faces outlined with boxes. Zenus engineers had trained the system to recognize emotions by having it examine a huge data set of facial expressions with labels describing relevant feelings. The company validated the program’s performance in various ways, including live tests in which people reported how they felt at the moment an image was taken. The system, Moutafis said, “works indoors, it works with masks, with no lighting, it works outdoors when people wear hats and sunglasses.”
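The training step Moutafis describes is ordinary supervised learning: show a model many face images, each labeled with the feeling it is supposed to express, then check the result against held-out data. Below is a minimal, generic sketch of that step in Python with scikit-learn; `load_labeled_faces` is a hypothetical stand-in for the huge labeled dataset the article mentions, and the random data it returns exists only so the example runs.

```python
# Generic sketch of training an expression classifier on labeled face images.
# load_labeled_faces() is a hypothetical loader; real systems use large
# proprietary datasets, not the random arrays generated here.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split


def load_labeled_faces():
    """Hypothetical loader: returns flattened face crops and emotion labels."""
    rng = np.random.default_rng(0)
    X = rng.random((200, 48 * 48))                    # 200 flattened 48x48 crops
    y = rng.choice(["happy", "neutral"], size=200)    # label per image
    return X, y


X, y = load_labeled_faces()
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0
)

model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# The article notes the company also validated against people's self-reports;
# a held-out test set is the corresponding offline check.
print("held-out accuracy:", accuracy_score(y_test, model.predict(X_test)))
```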
The Zenus setup is one example of a new technology — called emotion AI or affective computing — that combines cameras and other devices with artificial-intelligence programs to capture facial expressions, body language, vocal intonation, and other cues. The goal is to go beyond facial recognition and identification to reveal something previously invisible to technology: the inner feelings, motivations and attitudes of the people in the images. “Cameras have been dumb,” says A.C.L.U. senior policy analyst Jay Stanley, author of the 2019 report The Dawn of Robot Surveillance. “Now they’re getting smart. They are waking up. They are gaining the ability not just to dumbly record what we do but to make judgments about it.”
Emotion AI has become a popular market research tool — at another trade show, Zenus told Hilton Hotels that a puppies-and-ice-cream event the company staged was more engaging than the event’s open bar — but its reach extends into areas where the stakes are much higher. Systems that read cues of feeling, character and intent are being used or tested to detect threats at border checkpoints, evaluate job candidates, monitor classrooms for boredom or disruption, and recognize signs of aggressive driving. Major automakers are putting the technology into coming generations of vehicles, and Amazon, Microsoft, Google and other tech companies offer cloud-based emotion-AI services, often bundled with facial recognition. Dozens of start-ups are rolling out applications to help companies make hiring decisions. The practice has become so common in South Korea, for instance, that job coaches often make their clients practice going through AI interviews.
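For a sense of how that bundling looks in practice, here is a short sketch of calling one such cloud service, Amazon Rekognition, whose face-analysis API returns per-face emotion labels alongside its other facial attributes. The image filename is a placeholder, and an AWS account with credentials configured for boto3 is assumed.

```python
# Sketch of querying a cloud emotion-AI service (Amazon Rekognition via boto3).
# Assumes AWS credentials are configured; "attendee.jpg" is a placeholder image.
import boto3

rekognition = boto3.client("rekognition")

with open("attendee.jpg", "rb") as f:
    response = rekognition.detect_faces(
        Image={"Bytes": f.read()},
        Attributes=["ALL"],  # "ALL" includes the Emotions attribute
    )

for face in response["FaceDetails"]:
    # Each face comes back with a list of emotions and confidence scores;
    # report the one the service is most confident about.
    top = max(face["Emotions"], key=lambda e: e["Confidence"])
    print(f"{top['Type']}: {top['Confidence']:.1f}%")
```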