Leigh Manson reports in Singularity Hub:
Throughout history, we have used tools—hammers, canes, paintbrushes—as natural extensions of our bodies. These tools seem to disappear into our hands. But today’s electronic devices are unlike the simple tools of the past; they’re text-based and unintuitive. They force us to multitask: we must operate the tool through its interface while simultaneously performing the task at hand.
For example, GPS devices are incredible tools. But it’s possible to be lost even while holding one. When we use it, both the device and the physical world compete for our attention. We’re doing two things at once, and that’s inefficient.
In other words, much of modern technology is “unready-to-hand,” according to Jody Medich. Medich is Director of Design for Singularity University Ventures, where she provides design and innovation direction for corporate and startup teams. She also builds robots and rockets in her spare time. In a talk at Singularity University’s Global Summit, she shared what she thinks the future of computer interfaces will look like.
For starters, she urges us to move away from training humans to operate devices. Rather, our devices should become what she calls “ready to hand.” Like a perfectly ergonomic hammer, our tools should act seamlessly with our hand or body movements, freeing our minds to pay attention to the task rather than the tool. “Interfaces are killing us,” she said.
A Solution in Extended Reality?
Extended reality (XR) offers this promise of an improved interface by making technology ready to hand, Medich said. XR is an umbrella term that encompasses both augmented reality (AR) and virtual reality (VR), though even experts acknowledge the terms are used inconsistently. Clay Bavor, vice president of virtual and augmented reality at Google, thinks of AR and VR as two points on the spectrum of what he calls “immersive computing.”
Regardless of the semantics, Medich demonstrated that XR helps us remove clunky interfaces for a more effortless experience using technology. In an application for first responders, firefighters wear AR helmets with thermal imaging capabilities and toxicity sensors. The helmet doesn’t need an interface; firefighters see the thermal images as clearly as they see the physical world around them. “The ability to see in the types of environments that we work in is a game-changer for our industry,” said Tom Calvert, Menlo Park Fire Protection District battalion chief.
In the medical world, vein visualization is an AR technique that projects near-infrared light onto a patient’s skin, allowing medical professionals to “see through” the skin and locate veins. No screen or external device is required—the surface of the patient’s skin effectively becomes the interface. AccuVein, one of the manufacturers of vein visualization tools, claimed the product increased the likelihood of a successful first stick by 3.5 times, which contributes to increased patient satisfaction and decreased costs.
Medich noted that much of the work in augmented and virtual reality is happening in the medical sector. A medical research team from Duke University concluded that “there is evidence that VR can enhance the level of immersion in a distracting environment and that this occurs even when an individual is experiencing pain.”
We Learn With Our Entire Bodies
Medich mentioned another fact of human psychology that’s important for human-machine interface design: we tend to learn more efficiently when we use our whole bodies. Embodied cognition is the theory that cognition is not confined to the brain, and that the experiences of the body also affect our mental constructs and our performance on cognitive tasks.
For example, going for a walk sometimes clears our minds; the low-level physical exertion required for walking affects our ability to think. Or we might generate our best ideas in the shower, when our bodies are calmed by the sensation of water. But our screens do not involve the body—screens are visual, text-based, and flat. Extended reality, however, provides an opportunity to take advantage of embodied cognition because it offers an immersive experience and creates the perception that the entire body is engaged.
The Unfulfilled Promise of VR
For decades, VR evangelists have predicted that headsets, games, or other VR applications would take off, but the breakthrough hasn’t happened yet. In 2016, an MIT Technology Review article attributed the sluggish rise of VR to the high cost of devices like Oculus Rift and Touch controllers. Adam Rowe of Tech.co argued that for something to be cool, “it must feel new, yet old.” He noted that smartphones didn’t take off until the iPhone entered the scene, which improved upon but didn’t invent the smartphone.
Us + Invisible Interface = Superman?
Medich left the audience with a compelling thought about interfaces, again stressing her claim that the interface between ourselves and our tools should be as invisible as possible. Part of how we construct superheroes is that their powers come easily to them; a Superman who had to press buttons on a keypad to tap into his powers would not be Superman at all. Until our technologies are just as effortlessly accessible to us, we have not truly taken advantage of their power. Bavor supports this view: “VR can transport you somewhere else. AR leaves you where you are… They both give you superpowers.”