The question is whether this has inadvertently coarsened all human interactions. JL
Ken Gordon reports in The Atlantic:
It’s too soon to assess what speech-driven interfaces will do to children. But using one’s voice to get what one wants feels qualitatively different from silently inputting commands on a keyboard. Vocalizing one’s authority can be problematic, if done repeatedly and unreflectively—and today’s chatbots and digital assistants encourage a lot more repetition than reflection.
When I was a kid, in the early 1980s, I programmed a little in a language called BASIC. Recalling that long-ago era, I see myself, bowl cut and braces, tapping at the keyboard of some ancient computer:
10 PRINT “[Whatever]”
20 GOTO 10

And when I hit “return,” up jumps a digital column of whatever I’d entered between the quotation marks to fill the screen:

[Whatever]
[Whatever]
[Whatever]

And so on.
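For anyone who never typed BASIC, here is a rough sketch of what that two-line program does, written in Python purely as an illustration (the variable name is mine, nothing from that old machine):

message = "[Whatever]"   # whatever was typed between the quotation marks

while True:              # 20 GOTO 10: jump back and do it again, forever
    print(message)       # 10 PRINT: one more line of [Whatever] on the screen

It keeps printing until someone interrupts it, much as the BASIC loop ran until you hit the Break key.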
Later in my life, there were more advanced computing experiences—my parents eventually got me a TI-99/4A with Extended BASIC—but 20 GOTO 10 lingers. Those early days at the computer enabled me, for the first time, to issue commands. I was—suddenly, shockingly—a person to be obeyed. My commands didn’t carry any grand force, as do commands in, say, a military context, but issuing them did make me happy. The Nobel laureate Elias Canetti described the dynamic well some 60 years ago in Crowds and Power:
The power of those who give commands appears to grow all the time. Every command, however trivial, adds something to it, not only because in practice it generally benefits the person who gives it, but because, by the very nature of commands—their knife-edged precision and the recognition they exact in the whole sphere they traverse—it tends in every way to augment and secure his power.

Today, the power differential has changed. My own son, Ari, is 13. Ari’s a far more skilled computer user than I could ever hope to be—and he has access to extremely sophisticated equipment.

Ari makes me think about the future of computers, as technology moves away from the keyboard-and-monitor model of computing. Consider the Amazon Echo, a specimen of which is playing the audio version of Carrie Fisher’s The Princess Diarist in the other room as I type. For all its magical qualities, the Echo—or Alexa, to give the name the device responds to—is an imperfect interface. Alexa often has us repeating ourselves, but we forgive her because the very idea of conversing with a computer is still a wonderful novelty. Voice-activated computing is at an adolescent stage, which is fitting for my newly teenaged son.
“Alexa, play Jeopardy!,” he might say—and his word is her command.
And that gives me pause. My wife and I have expended much time and energy ensuring that when Ari speaks, he does so respectfully and intelligently. But he can speak to Alexa without any consideration at all. “Please” or “thank you” are never involved. In fact, polite words would just get in the way.
Of course, there was no “please” or “thank you” in my BASIC computations. But then, my programming was written—silently and solitarily. Alexa makes the command-based nature of computers audible. The device lives on the table where Ari, his mother, sister, and I eat every day. We talk at her all the time.
Kids who live with Alexa or other smart speakers have access to a digital genie. What might be the consequences of giving a child this voice-activated magic lamp, one with no limit to wishes and no consequences for exceeding the allotted amount?

Commands, as Canetti suggests, usually sting their recipients—it’s a sting that “sinks deeply into the person who has carried out the command and remains in him unchanged.” With Alexa, there’s no sting at all. I wonder if this crucial absence could, under certain circumstances, grow into an empathic blind spot.
Traditionally speaking, kids are too overwhelmed by commands to deliver any of their own. “Those most beset by commands are children,” writes Canetti. “It is a miracle that they ever survive the pressure and do not collapse under the burden of the commands laid on them by their parents and teachers.” For Ari, commanding Alexa is a regular part of life. I do it myself sometimes. Alexa is there, waiting for us to tell her what to do and to obey. He is—we all are—Alexa’s master.
Ari is 13, and mature enough to know the difference between a human and a computer interface programmed to sound like one. But as his dad, I want him to use his voice to create real human dialogue, of the sort the 20th-century Jewish philosopher Martin Buber proposed in his book I and Thou. Buber says that when people speak, they employ one of two essential dispositions (Buber calls them “basic words”): “I-It” and “I-You.” These are two different attitudes an “I” can take when speaking. The former is transactional; the latter is relational. As he writes in I and Thou: “When I confront a human being as my You and speak the basic word I-You to him, then he is no thing among things nor does he consist of things.” When people use I-You language, it’s about relating in the deepest sense, as opposed to using it as a means to some sort of end. This “I-You” relation is an area of meaningful connection, but throwing commands at Alexa habituates people to speaking “I-It” language out loud.
Now, I could be overreacting. Maybe speaking to Alexa is just programming by another means. It’s too soon to assess what, if anything, speech-driven interfaces will do to children (mine or anyone else’s). But to me, using one’s voice to get what one wants feels qualitatively different from silently inputting commands on a keyboard. Vocalizing one’s authority can be problematic, if done repeatedly and unreflectively—and today’s chatbots and digital assistants encourage a lot more repetition than reflection.