The difference, as the following article explains, is that in some cases the technology may decide for you rather than asking your opinion - or permission.
The technology is already available and being widely tested. The problem, as so often happens with interpretive algorithms (your phone assuming it knows what word you are trying to text is a crude example), is that they may not actually get your desires right. Of greater concern is that while claiming to provide consumers with greater 'efficiency,' the purveyors of these systems may limit choice by imposing selections that serve their economic advantage rather than the customer's. JL
Jessi Hempel reports in Wired:
Sometime next summer, you’ll be able to watch a horror series that is exactly as scary as you want it to be—no more, no less. You’ll pull up the show, which relies on software from the artificial intelligence startup Affectiva, and tap a button to opt in. Then, while you stare at your iPad, its camera will stare at you.
The software will read your emotional reactions to the show in real time. Should your mouth turn down a second too long or your eyes squeeze shut in fright, the plot will speed along. But if your eyes grow large and hold your interest, the program will draw out the suspense. “Yes, the killing is going to happen, but whether you want to be kept in the tension depends on you,” says Julian McCrea, founder of the London-based studio Portal Entertainment, which has a development deal with a large unidentified entertainment network to produce the series. He calls Affectiva’s face-reading software, Affdex, “an incredible piece of technology.”
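To make that mechanism concrete, here is a minimal sketch of how a player might branch pacing on per-frame emotion readings. It is only an illustration: the score names, thresholds, and the idea of three pacing states are assumptions, not Affectiva's or Portal Entertainment's actual design.

```python
# Hypothetical emotion-adaptive pacing. The score names ("fear",
# "engagement") and thresholds are illustrative, not Affectiva's API.

def choose_pacing(frame_scores: dict) -> str:
    """Pick the next story beat from per-frame emotion scores in [0, 1]."""
    fear = frame_scores.get("fear", 0.0)              # e.g. eyes squeezed shut
    engagement = frame_scores.get("engagement", 0.0)  # eyes wide, attentive

    if fear > 0.8:
        return "advance"       # viewer is overwhelmed: move the plot along
    if engagement > 0.6:
        return "hold_tension"  # viewer is rapt: stretch out the suspense
    return "default_cut"       # neutral reaction: play the standard edit
```

A real player would presumably also smooth scores over a window of frames, since a single misread expression should not whipsaw the story.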
McCrea is one of the first outside developers to experiment with Affectiva’s developer tools, which make technology capable of interpreting feelings by tracking your facial expression. Scientists Roz Picard and Rana el Kaliouby spun the Waltham, Massachusetts-based tech startup out of MIT Media Lab in 2009. Picard has since left the company, but el Kaliouby, 36, remains the chief science officer and is committed to a bigger vision: “Personally, I’m not going to stop until this tech is embedded in all of our lives.” Already, CBS has used it to determine how new shows might go down with viewers. And during the 2012 presidential election, el Kaliouby’s team experimented with using it to track a sample of voters during a debate.
With $20 million in venture funding, the company has so far worked closely with a few partners to test its commercial applications. Now it plans to open its tools to everyone. Starting today, Affectiva will invite developers to experiment with a 45-day free trial and then license its tools. Remember Intel Inside? El Kaliouby envisions “Affectiva-embedded” technology, saying, “It’ll sit on your phone, in your car, in your fridge. It will sense your emotions and adapt seamlessly without being in your face.” It will just notice everything that’s happening on your face. She’ll expand on her strategy May 19 at the LDV Vision Summit in New York, a gathering of some of the smartest companies cracking the problem of machine vision.
Millions of Faces
El Kaliouby has a PhD in computer science from Cambridge University, completed a postdoc at MIT Media Lab, and built Affectiva’s core technology as part of her academic work, intending to use it to help children with autism. “As I was doing that we started getting a lot of interest from industry,” says el Kaliouby. “The autism research was limited in scope,” she explains, so she turned to the business world to have a greater impact.
Affdex, the company’s signature software, builds detailed models of the face, taking into account the crinkle of the skin around the eye when you smile or the dip in the corner of your bottom lip when you frown. Since el Kaliouby started working on the Affectiva algorithms, the software has logged 11 billion of these data points, taken from 2.8 million faces in 75 countries.
With its massive data set, el Kaliouby believes Affectiva has developed an accurate read on human emotions. The software can, in effect, decode feelings. Consider Affectiva’s take on tracking empathy: “An example would be the inner eyebrow rise,” says el Kaliouby. “Like when you see a cute puppy and you’re, like, awww!” It can even note when you are paying attention.
The software relies on the Facial Action Coding System, a taxonomy of 46 human facial movements that can be combined in different arrays to identify and label emotions. When the system was developed in the late 1970s, humans scored emotional states manually by watching the movement of facial muscles, which was time-intensive. “It takes about five minutes to code one minute of video,” says el Kaliouby. “So we built algorithms that automate it.” The software had to be trained to recognize variety in expressions. My smirk, for example, might not look like your smirk. “It’s like training a kid to recognize what an apple is,” el Kaliouby says.
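As a rough illustration of the coding scheme described above, the sketch below maps a few FACS action units (AUs) to an emotion label. The two hand-written rules are simplified textbook pairings (a Duchenne smile for joy, a classic sadness configuration), not Affdex's learned classifier.

```python
# Simplified Facial Action Coding System (FACS) scoring. Affdex learns its
# mappings from millions of labeled faces; these rules are toy examples.
# AU1 = inner brow raiser, AU4 = brow lowerer, AU6 = cheek raiser,
# AU12 = lip corner puller, AU15 = lip corner depressor.

EMOTION_RULES = {
    "joy": {6, 12},         # Duchenne smile: cheeks raised, lip corners up
    "sadness": {1, 4, 15},  # inner brows up, brows knit, lip corners down
}

def label_emotion(aus: dict, threshold: float = 0.5) -> str:
    """Return the first emotion whose required AUs all exceed threshold."""
    active = {au for au, intensity in aus.items() if intensity >= threshold}
    for emotion, required in EMOTION_RULES.items():
        if required <= active:  # all required AUs are active in this frame
            return emotion
    return "neutral"

# One frame's detected action-unit intensities, keyed by AU number.
print(label_emotion({6: 0.9, 12: 0.8}))  # -> "joy"
```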
Smile!
Five years in, the technology has become robust enough to be reliably useful. For example, experience designer Steve McLean, who runs the Wisconsin design firm Wild Blue Technologies, has used Affectiva to build a video display for Hershey to use in retail stores. If you smile at the screen, the display dispenses a free chocolate sample. Tech startup OoVoo, which competes with Skype, has integrated the software into its video chat to create a product called intelligent video, which can read chatters’ emotions. “We’re looking at focus groups, online education, and political affinity,” says JP Nauseef, managing director of Myrian Capital, which invested in both Affectiva and OoVoo and holds a seat on Affectiva’s board.
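The Hershey display is simple enough to picture in code. Here is a minimal sketch, assuming a hypothetical per-frame smile score; the detector and dispenser interfaces are invented for illustration.

```python
import time

SMILE_THRESHOLD = 0.7  # minimum smile score in [0, 1] to trigger a sample
COOLDOWN_SECONDS = 30  # avoid dispensing repeatedly to the same shopper

def run_kiosk(detector, dispenser):
    """Poll a smile detector and release a sample on a strong smile."""
    last_dispense = 0.0
    while True:
        score = detector.read_smile_score()  # assumed: returns float in [0, 1]
        if score >= SMILE_THRESHOLD and time.time() - last_dispense > COOLDOWN_SECONDS:
            dispenser.release_sample()
            last_dispense = time.time()
        time.sleep(0.1)  # poll roughly ten times a second
```

The cooldown is a guess at a practical detail: without it, one grinning shopper could empty the machine.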
But for all of Affectiva’s potential, it will take more than creative developers to help its technology catch on more broadly. “The hidden discussion that hasn’t been brought up is trust,” says Charlene Li, CEO of the research outfit Altimeter Group, who has followed Affectiva closely since 2011. “I love the product, but I’m also terrified by it,” she says. She points out that should this data fall into the wrong hands, it could be dangerous for consumers. What happens, for example, if you are often sad while using a piece of Affectiva-embedded software and the software’s developer chooses to sell that information to a pharmaceutical company?
It’s a concern that el Kaliouby takes very seriously. “We actually don’t store any personal information about the consumers, so we do not have any way of tying back the facial video to an individual,” she says. “We have 2.78 million face videos in the platform, and if your face was in there, none of our team would be able to pull it out for you.”
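One way to read that claim is that the pipeline keeps only derived, de-identified scores and throws the raw video away. The sketch below shows that posture; it is an assumption about the architecture, not Affectiva's documented pipeline, and the classifier interface is invented.

```python
import uuid

def process_session(frames, classifier):
    """Reduce a face video to anonymous, aggregate emotion metrics."""
    scores = [classifier.score(f) for f in frames]  # per-frame score dicts
    if not scores:
        return None
    record = {
        "session_id": uuid.uuid4().hex,  # random ID, tied to no individual
        "avg_joy": sum(s["joy"] for s in scores) / len(scores),
        "avg_attention": sum(s["attention"] for s in scores) / len(scores),
    }
    # The raw frames go out of scope here and are never written to disk,
    # so there is nothing to tie a stored record back to a face.
    return record
```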
That may be so, but as the company makes its tools available to a broader set of developers, it will have to monitor how the software is rolled out to prevent those developers from abusing it, and to make sure that users interacting with it for the first time are aware of the tracking and feel in control of the experience.
The technology may be very good at reading your emotions. But humans will have to take care in how they act on them.