A Blog by Jonathan Low


Sep 24, 2013

Is Google Destroying Our Minds - or Is It Even Weirder Than That?

We're losing our minds! Google and Apple and all of the other purveyors and enablers and enhancers are sucking the intelligence right out of our heads. We no longer have to think or remember or conceive...anything. It's all right there at the click of a trackpad or a mouse. No wonder the machines are taking over; they're better programmed than we are.

Yeah, so, that's the line, anyway. The meme that launched a thousand magazine covers.

But, as the following article explains, it's far more complicated than that. It turns out we humans have never been particularly good at remembering much of anything, especially details. Our coping mechanism has been the ability to tap into the memories of those around us, a capacity called transactive memory. Research on it has generally found positive outcomes for those who engage in it, and it is one reason organizations have been encouraged to sponsor group learning and group work: they're more effective. Part of that effectiveness is that groups do a better job of retaining and interpreting information that the entire group can use.

What is interesting is that technology is now mimicking those functions in ways that support and enhance the processes we adapted more or less by chance.

So, fear not. Every time your first instinct is to look something up on your smartphone, it is not a surrender of some genetic legacy; it is just a technological extension of a process that began long before you were born, let alone before you upgraded your device. JL

Clive Thompson comments in Slate:

Frankly, our brains have always been terrible at remembering details.
Is the Internet ruining our ability to remember facts? If you’ve ever lunged for your smartphone during a bar argument (“one-hit father of twerking pop star”—Billy Ray Cyrus!), then you’ve no doubt felt the nagging fear that your in-brain memory is slowly draining away. As ever more fiendishly powerful search tools emerge—from IBM’s Jeopardy!-playing Watson to the “predictive search” of Google Now—these worries are, let’s face it, only going to grow.
So what’s going on? Each time we reach for the mouse pad when we space out on the ingredients for a Tom Collins or the capital of Arkansas, are we losing the power to retain knowledge?
The short answer is: No. Machines aren’t ruining our memory.
The longer answer: It’s much, much weirder than that!
* * *
What’s really happening is that we’ve begun to fit the machines into an age-old technique we evolved thousands of years ago—“transactive memory.” That’s the art of storing information in the people around us. We have begun to treat search engines, Evernote, and smartphones the way we’ve long treated our spouses, friends, and workmates. They’re the handy devices we use to compensate for our crappy ability to remember details.
We’re good at retaining the gist of the information we encounter. But the niggly, specific facts? Not so much. In a 1990 study, long before the Interwebs supposedly corroded our minds, the psychologist Walter Kintsch ran an experiment in which subjects read several sentences. When he tested them 40 minutes later, they could generally remember the sentences word for word. Four days later, though, they were useless at recalling the specific phrasing of the sentences—but still very good at describing the meaning of them.
The exception is when you’re obsessed with a subject. If you’re deeply into something—football, the Civil War, Pokémon—then you’re usually great at hoovering up and retaining details. When you’re an expert in a subject, you can retain new factoids on your favorite topic easily. This only works for the subjects you’re truly passionate about, though. Baseball fans can reel off stats for their favorite players, then space out on their own birthday.
So humanity has always relied on coping devices to handle the details for us. We’ve long stored knowledge in books, paper, Post-it notes.
But when it comes to retrieving information on the fly, all day long? We don’t rely on documents for the details as much as you’d think. No, we rely on something much more immediate: other people.
Harvard psychologist Daniel Wegner—and his colleagues Ralph Erber and Paula Raymond—first began to systematically explore “transactive memory” back in the ’80s. Wegner noticed that spouses often divide up memory tasks. The husband knows the in-laws' birthdays and where the spare light bulbs are kept; the wife knows the bank account numbers and how to program the TiVo. If you ask the husband for his bank account number, he'll shrug. If you ask the wife for her sister-in-law's birthday, she can never remember it. Together, they know a lot. Separately, less so.
Wegner suspected this division of labor takes place because we have pretty good "metamemory." We're aware of our mental strengths and limits, and we're good at intuiting the memory abilities of others. Hang around a workmate or a romantic partner long enough and you discover that while you're terrible at remembering your corporate meeting schedule, or current affairs in Europe, or how big a kilometer is relative to a mile, they're great at it. They’re passionate about subject X; you’re passionate about subject Y. So you each begin to subconsciously delegate the task of remembering that stuff to the other, treating your partner like a notepad or encyclopedia, and they do the reverse. In many respects, Wegner noted, people are superior to notepads and encyclopedias, because we’re much quicker to query: Just yell a fuzzily phrased question across to the next cubicle (where do we keep the thing that we use for that thing?) and you’ll get an answer in seconds. We share the work of remembering, Wegner argued, because it makes us collectively smarter.
Experiments have borne out Wegner's theory. One group of researchers studied older couples who'd been together for decades. When separated and questioned individually about the events of years ago, they'd sometimes stumble on details. But questioned together, they could retrieve them. How? They’d engage in "cross-cuing," tossing clues back and forth until they triggered each other. This is how a couple remembered a show they saw on their honeymoon 40 years previously:
F: And we went to two shows, can you remember what they were called?
M: We did. One was a musical, or were they both? I don't ... no ... one ...
F: John Hanson was in it.
M: Desert Song.
F: Desert Song, that's it, I couldn't remember what it was called, but yes, I knew John Hanson was in it.
M: Yes.
They were, in a sense, Googling each other. Other experiments have produced similar findings. In one, people were trained in a complex task—assembling an AM/FM radio—and tested a week later. Those who'd been trained in a group and tested with that same group performed far better than individuals who worked alone; together, they recalled more steps and made fewer mistakes. In 2009 researchers followed 209 undergraduates in a business course as they assembled into small groups to work on a semester-long project. The groups that scored highest on a test of their transactive memory—in other words, the groups whose members most relied on each other to recall information—performed better than those that relied on it less. Transactive groups don't just remember better: They also analyze problems more deeply, developing a better grasp of underlying principles.
We don't remember in isolation—and that's a good thing. "Quite simply, we seem to record as much outside our minds as within them," as Wegner has written. "Couples who are able to remember things transactively offer their constituent individuals storage for and access to a far wider array of information than they would otherwise command." These are, as Wegner describes it in a lovely phrase, "the thinking processes of the intimate dyad."
And as it turns out, this is what we’re doing with Google and Evernote and our other digital tools. We’re treating them like crazily memorious friends who are usually ready at hand. Our “intimate dyad” now includes a silicon brain.
Recently, a student of Wegner’s—the Columbia University scientist Betsy Sparrow—ran some of the first experiments that document this trend. She gave subjects sentences of random trivia (like "An ostrich's eye is bigger than its brain" and "The space shuttle Columbia disintegrated during reentry over Texas in Feb. 2003.") and had them type the sentences into a computer. With some facts, the students were explicitly told the information wouldn't be saved. With others, the screen would tell them that the fact had been saved, in one of five blandly named folders, such as FACTS, ITEMS, or POINTS. When Sparrow tested the students, the people who knew the computer had saved the information were less likely to personally recall the info than the ones who were told the trivia wouldn't be saved. In other words, if we know a digital tool is going to remember a fact, we're slightly less likely to remember it ourselves.
We are, however, confident of where in the machine we can refind it. When Sparrow asked the students simply to recall whether a fact had been saved or erased, they were better at recalling the instances where a fact had been stored in a folder. As she wrote in a Science paper, "believing that one won't have access to the information in the future enhances memory for the information itself, whereas believing the information was saved externally enhances memory for the fact that the information could be accessed." Each situation strengthens a different type of memory. Another experiment found that subjects were really good at remembering the specific folder names containing the right factoid, even though the folders had extremely unremarkable names.
"Just as we learn through transactive memory who knows what in our families and offices, we are learning what the computer 'knows' and when we should attend to where we have stored information in our computer-based memories," Sparrow wrote.
You could say this is precisely what we most fear: Our mental capacity is shrinking! But as Sparrow pointed out to me when we spoke about her work, that panic is misplaced. We’ve stored a huge chunk of what we “know” in the people around us for eons. But we rarely recognize this because, well, we prefer our false self-image as isolated, Cartesian brains. Novelists in particular love to rhapsodize about the glory of the solitary mind; this is natural, because their job requires them to sit in a room by themselves for years on end. Most of the rest of us, though, think and remember socially. We’re dumber and less cognitively nimble if we’re not around other people—and, now, other machines.
In fact, as transactive partners, machines have several advantages over humans. For example, if you ask them a question you can wind up getting way more than you’d expected. If I’m trying to recall which part of Pakistan has experienced tons of U.S. drone strikes and I ask a colleague who follows foreign affairs, he'll tell me "Waziristan." But when I queried this online, I got the Wikipedia page on "Drone attacks in Pakistan." I wound up reading about the astonishing increase of drone attacks (from one a year to 122 a year) and some interesting reports about the surprisingly divided views of Waziristan residents. Obviously, I was procrastinating—I spent about 15 minutes idly poking around related Wikipedia articles—but I was also learning more, reinforcing my generalized, “schematic” understanding of Pakistan.
Now imagine if my colleague behaved like a search engine—if, upon being queried, he delivered a five-minute lecture on Waziristan. Odds are I'd have brusquely cut him off. "Dude. Seriously! I have to get back to work." When humans spew information at us unbidden, it's boorish. When machines do it, it’s enticing. And there are a lot of opportunities for these encounters. Though you might assume search engines are mostly used to answer questions, some research has found that up to 40 percent of all queries are acts of remembering. We're trying to refresh the details of something we've previously encountered.
If there’s a big danger in using machines for transactive memory, it’s not about making us stupider or less memorious. It’s in the inscrutability of their mechanics. Transactive memory works best when you have a sense of how your partners' minds work—where they're strong, where they're weak, where their biases lie. I can judge that for people close to me. But it's harder with digital tools, particularly search engines. They’re for-profit firms that guard their algorithms like crown jewels. And this makes them different from previous forms of transactive machine memory. A public library—or your notebook or sheaf of papers—keeps no intentional secrets about its mechanisms. A search engine keeps many. We need to develop literacy in these tools the way we teach kids how to spell and write; we need to be skeptical about search firms’ claims of being “impartial” referees of information.
What’s more, transactive memory isn’t some sort of cognitive Get Out of Jail Free card. High school students, I’m sorry to tell you: You still need to memorize tons of knowledge. That’s for reasons that are civic and cultural and practical; a society requires shared bodies of knowledge. And on an individual level, it’s still important to slowly study and deeply retain things, not least because creative thought—those breakthrough ahas—comes from deep and often unconscious rumination, your brain mulling over the stuff it has onboard.
But you can stop worrying about your iPhone moving your memory outside your head. It moved out a long time ago—yet it’s still all around you.
