A Blog by Jonathan Low

 

Jul 9, 2015

Will Gestures Be the Interface for the Internet of Things?

Using instincts developed over evolutionary time and honed more recently is a smart way to enhance convenience, speed and productivity (plus sales, where appropriate) while lowering the costs associated with training and consumer frustration. JL

Paul Daugherty, Olof Schybergson and James Wilson report in Harvard Business Review:

The human body interacts with the physical world in subtle and sophisticated ways. Our eyes see a rainbow of color, our ears hear a range of frequencies, and our hands are great for grabbing whichever tool our creative brains can invent. But today’s technology can sometimes feel like it’s out of sync with our senses as we peer at small screens, flick and pinch fingers across smooth surfaces, and read tweets “written” by programmer-created bots. These new technologies can increasingly make us feel disembodied.
As people and companies prepare to adapt to the Internet of Things (IoT), with its ever-widening focus on machine-to-machine (M2M) communication, it’s a good time to ask where people will fit in. What will future “H2M” (human-to-machine) interactions look like in a world where physical objects are more networked than ever and are even having their own “conversations” around us?
One answer is gestures.
We have become familiar and even comfortable with gesture technology whether we realize it or not. From smartphone screens (pinch to shrink and sweep to scroll) to infrared sensors in bathroom faucets (don’t hold your hands too close), we have been trained to interact with “smart” objects in particular ways.
Gesture technology is important because when it is implemented correctly, it becomes something we barely think about. It can make basic interactions with everyday objects simple and intuitive.
In our research into a discrete class of offerings we call Living Services — that is, new IoT-based services delivered to people in the context of their everyday lives — we note several ways companies are testing uses for gesture. For example, Ericsson is experimenting with a product it calls Connected Paper to turn the human body into a sort of antenna. The product prints a circuit or tiny battery directly on packaging. When the circuit is touched, product information is transmitted through each uniquely identified person’s body to the phone in his or her hand. By touching a container of soup, a diabetic might see detailed information on ingredients and additives like sugar, helping her to make better decisions about what to eat.
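To make the interaction concrete, here is a minimal sketch of what the phone-side handling of such a touch event might look like. The product identifiers, catalog, and dietary threshold are invented for illustration and are not Ericsson’s actual protocol.

```python
# Hypothetical sketch of a phone handling a Connected Paper touch event.
# The payload format, catalog, and dietary rule are illustrative assumptions.

PRODUCT_CATALOG = {
    "soup-0421": {"name": "Tomato Soup", "sugar_g": 12, "sodium_mg": 890},
    "soup-0388": {"name": "Chicken Broth", "sugar_g": 1, "sodium_mg": 560},
}

def on_touch_event(product_id: str, max_sugar_g: float = 5.0) -> str:
    """Called when the body-coupled circuit reports a touched package."""
    product = PRODUCT_CATALOG.get(product_id)
    if product is None:
        return "Unknown product"
    warning = " (exceeds your sugar limit)" if product["sugar_g"] > max_sugar_g else ""
    return f"{product['name']}: {product['sugar_g']} g sugar{warning}"

print(on_touch_event("soup-0421"))  # Tomato Soup: 12 g sugar (exceeds your sugar limit)
```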
Gesture-based technology is already being incorporated into product design by companies such as Reemo, which produces what is effectively a wrist-worn mouse. The device enables users to gesture-control connected objects ranging from lights to kitchen appliances to blinds to computer hardware.
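A rough sketch of the idea behind such a controller, assuming a simple table that binds (gesture, device) pairs to commands; the gesture names, devices, and commands here are hypothetical and are not Reemo’s actual API.

```python
# Minimal sketch: routing wrist gestures to commands on connected devices.
# Gesture names and device commands are illustrative assumptions.

GESTURE_BINDINGS = {
    ("point", "lamp"): "toggle_power",
    ("twist", "lamp"): "dim",
    ("swipe_up", "blinds"): "raise",
    ("swipe_down", "blinds"): "lower",
}

def dispatch(gesture: str, target_device: str) -> str:
    command = GESTURE_BINDINGS.get((gesture, target_device))
    if command is None:
        return f"No binding for '{gesture}' on '{target_device}'"
    return f"Sending '{command}' to {target_device}"

print(dispatch("point", "lamp"))     # Sending 'toggle_power' to lamp
print(dispatch("swipe_up", "lamp"))  # No binding for 'swipe_up' on 'lamp'
```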
Another promising development is Google’s Project Soli, which turns hand gestures into virtual controls without the need for a virtual reality headset. Instead, a tiny chip-scale sensor uses radar to track, at high speed and with great accuracy, not just big arm-waving gestures but also sub-millimeter hand movements, allowing electronic devices to be controlled without touching them.
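As a toy illustration of turning fine-grained motion into discrete commands, the sketch below classifies a short window of simulated hand-displacement samples. The thresholds and labels are arbitrary assumptions and have nothing to do with Soli’s actual radar processing.

```python
# Toy example: classifying a window of hand-displacement samples (in
# millimeters) into a coarse gesture label. Thresholds are arbitrary
# assumptions, not derived from any real radar sensor.

from statistics import mean

def classify_window(displacements_mm: list[float]) -> str:
    avg = mean(displacements_mm)
    if abs(avg) < 0.2:          # sub-millimeter jitter: treat as no gesture
        return "idle"
    if avg > 0:
        return "slide_right" if avg < 5 else "swipe_right"
    return "slide_left" if avg > -5 else "swipe_left"

print(classify_window([0.05, -0.1, 0.08]))  # idle
print(classify_window([0.6, 0.9, 1.1]))     # slide_right
print(classify_window([7.5, 8.0, 9.2]))     # swipe_right
```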
Yet, as gestures become a more regular feature of connected devices, so too does the issue of “gesture conflict.” As the technology proliferates, we could see an increasing number of confusing, chaotic experiences where movements trigger unplanned actions.
Consider two broad areas of potential conflict. One is between the major technology platform owners. There is currently no standard format for body-to-machine gestures. Who will be the first to “own,” for instance, the hand gesture for a command as simple as “stop”? Without standards, a gesture that works with Spotify at home might not work in your minivan, where the vehicle manufacturer’s gestures must be used instead.
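The conflict is easy to picture in code: if each platform ships its own binding for the same physical gesture, the user’s intent is interpreted differently depending on context. The platform names and bindings below are purely illustrative.

```python
# Illustration of "gesture conflict": the same physical gesture mapped to
# different commands by different (hypothetical) platform bindings.

PLATFORM_BINDINGS = {
    "home_speaker": {"open_palm": "pause_playback"},
    "minivan":      {"open_palm": "answer_call"},
}

def interpret(platform: str, gesture: str) -> str:
    return PLATFORM_BINDINGS.get(platform, {}).get(gesture, "unrecognized")

for platform in PLATFORM_BINDINGS:
    print(platform, "->", interpret(platform, "open_palm"))
# home_speaker -> pause_playback
# minivan -> answer_call
```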
A second conflict could occur because gestures that are effective in one culture may not work in another. The gesture of an open palm could be used to initiate a payment, for example, but in Arab countries it would symbolize begging. Companies must be aware of these differences as they seek to sell their products and designs globally.
New approaches to human-machine interactions must be simple and natural to flourish. They should also be open. Many “wearables” currently available work only in closed ecosystems, where it’s difficult to share or analyze data across platforms. Consumers may well resist a world in which they must choose between open and closed systems. And their frustration could slow down the uptake of these services, a potential loss for companies and users alike.
Open systems offer designers the ability to build around emerging standards. This will be critical for the sustainable growth of these technologies. The Withings Wi-Fi Body Scale is an example of an open system: the device, which measures consumers’ weight and body fat, syncs with third-party fitness app MyFitnessPal in order to automatically update the user’s weight in the app.
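A hedged sketch of the open-system pattern: a scale publishes each measurement to whatever third-party subscribers have registered, rather than locking the data inside one ecosystem. The class and field names are assumptions, not the real Withings or MyFitnessPal integration.

```python
# Sketch of an open-system sync: a scale publishes a measurement and any
# registered third-party subscriber receives it. Names and fields are
# illustrative assumptions.

from typing import Callable

Measurement = dict  # e.g. {"weight_kg": 72.4, "body_fat_pct": 18.9}

class OpenScale:
    def __init__(self) -> None:
        self.subscribers: list[Callable[[Measurement], None]] = []

    def subscribe(self, callback: Callable[[Measurement], None]) -> None:
        self.subscribers.append(callback)

    def publish(self, measurement: Measurement) -> None:
        for callback in self.subscribers:
            callback(measurement)

def fitness_app_update(measurement: Measurement) -> None:
    print(f"Fitness app updated weight to {measurement['weight_kg']} kg")

scale = OpenScale()
scale.subscribe(fitness_app_update)
scale.publish({"weight_kg": 72.4, "body_fat_pct": 18.9})
```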
It’s still early for organizations designing for connected devices, and so we should expect some bumps as companies improve interfaces to better fit with the way people think and act. With openness comes a broader set of possibilities for gestures and ways to interact. Let the best gestures win.
