In the meantime, life has continued (and not having internet, wifi or mobile connections seems to matter a lot less once you give in to it). As the following article suggests, our desperate desire to make computers seem more human - and, presumably, therefore, less threatening - may be less well founded than we'd like - or hope. JL
Adrienne LaFrance reports in The Atlantic:
Humans have a tendency to rely on machines as a way of understanding ourselves. The mechanical world has long provided metaphors for how the human body works.
“We’ve always had technological analogies to try to explain biology,” said Chris Atkeson, a roboticist at Carnegie Mellon University. “One idea of how the brain worked was it’s hydraulic. People described hydraulic clocks and the heart pumping blood. Then we had steam engines as a metaphor for how [our bodies] worked. Then we got electricity.”
Expanding upon this tradition, in 1948, the mathematician and philosopher Norbert Wiener published his book Cybernetics, which used computer-brain analogies to lay the foundation for how people now think about the Information Age.
Today, of course, computers figure prominently in explanations of living systems. People routinely describe the brain as computer-like, as though our memories are stored on a hard drive made of gray matter. Under scrutiny, that analogy is no less clunky than the figurative comparisons that preceded it. And the limitations of these metaphors go both ways. Machine learning, which involves training a computer to recognize patterns by showing it large datasets of images or other information, is often described as teaching a computer brain to “see” the world a certain way. Which makes some sense: Both machines and humans amass knowledge based on what they’ve seen in the past.
“Everything a computer ‘sees’ is based on what it ‘knows’... depending on what you mean by ‘sees,’” Emily Pittore, a software engineer at iRobot, wrote to me in an email. “I use scare quotes because I hesitate to apply the language of human cognition to computers too liberally.”
“If you mean ‘sees’ as ‘optical input,’ then computers always see the same thing,” she said. In other words, machines ignore minor aesthetic blips and sensor noise, while “humans have a much more complicated sensor—eyeballs and a brain,” she said.
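To make that notion of machine “seeing” a bit more concrete, here is a minimal sketch (not from the article; the data, labels, and function names are invented for illustration) of the kind of pattern recognition described above: the program's entire sense of what an input “is” comes from the examples it was shown during training.

```python
from collections import defaultdict

def train(examples):
    """Average the feature vectors seen for each label ("what the machine knows")."""
    sums, counts = defaultdict(lambda: None), defaultdict(int)
    for features, label in examples:
        if sums[label] is None:
            sums[label] = [0.0] * len(features)
        sums[label] = [s + f for s, f in zip(sums[label], features)]
        counts[label] += 1
    return {label: [s / counts[label] for s in vec] for label, vec in sums.items()}

def classify(model, features):
    """Label a new input by whichever learned average it is closest to."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(model, key=lambda label: dist(model[label], features))

# Toy "dataset": two-number features standing in for pixels or measurements.
training_data = [([0.9, 0.1], "letter"), ([0.8, 0.2], "letter"),
                 ([0.1, 0.9], "squiggle"), ([0.2, 0.8], "squiggle")]
model = train(training_data)
print(classify(model, [0.85, 0.15]))  # -> "letter", purely because of what it was shown
```

Show this toy program different examples and the same input gets a different label, which is the machine analogue of the article's point that what you already know shapes what you see.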
In one recent study, researchers at Johns Hopkins University examined how learning to read Arabic changes the way people see its letters. They found that the same letters look different to people, depending on whether they can read Arabic. And though they focused on letters for their assessment, the researchers said their findings would apply to anything—objects, photographs, illustrations, and so on. The overarching takeaway was this: What you already know profoundly affects how you see. Which sounds intuitive, right? But these findings are more nuanced than they may seem.
“We’re not just saying, ‘Oh, you’re an expert, so you see things differently,’” said Robert Wiley, a graduate student in cognitive science at Johns Hopkins and the study’s lead author. “The subtle point is that it goes beyond your explicit knowledge to actually change your visual system. These are things we don’t have conscious access to.”
Which is why humans can’t really unlearn things neatly. Because we don’t know how to untangle what we see and how we see it in the first place. You might forget a fact or lose a skill you once had, but there’s no way to map—and therefore no way to deliberately refine—the ways in which exposure to certain inputs has altered your perceptions. Machines, however, can unlearn.
In fact, some computer scientists say it’s increasingly important that they’re designed for this purpose. Part of the promise of machine learning systems is that computers will be able to process tremendous data streams—for purposes like facial recognition. Entire industries are transforming as a result of these computing powers. With the proliferation of sensitive data flowing through vast networks, humans need to be able to tell computers when and precisely how to forget huge swaths of what’s called data lineage—the complex information, computations, and derived data that propagate through brain-like computer networks.
“Such forgetting systems must carefully track data lineage even across statistical processing or machine learning, and make this lineage visible to users,” wrote Yinzhi Cao and Junfeng Yang, computer science professors at Lehigh University and Columbia University, respectively. “They let users specify the data to forget with different levels of granularity… These systems then remove the data and revert its effects so that all future operations run as if the data had never existed.”
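Here is a minimal sketch, loosely in the spirit of that quoted description, of a system that tracks each record's lineage so its effect on the aggregate state can be reverted exactly. It is not Cao and Yang's actual system; the class, method, and record names, and the toy data, are invented for illustration.

```python
class ForgettingCounter:
    """Keeps word counts per label, plus per-record lineage so any record's
    contribution can be subtracted later, as if it had never been seen."""

    def __init__(self):
        self.counts = {}    # aggregate state the "model" actually consults
        self.lineage = {}   # record_id -> exactly what that record contributed

    def learn(self, record_id, label, words):
        contribution = []
        for word in words:
            key = (label, word)
            self.counts[key] = self.counts.get(key, 0) + 1
            contribution.append(key)
        self.lineage[record_id] = contribution

    def unlearn(self, record_id):
        # Revert the record's effect on the aggregate counts, then drop its lineage.
        for key in self.lineage.pop(record_id):
            self.counts[key] -= 1
            if self.counts[key] == 0:
                del self.counts[key]

    def score(self, label, word):
        return self.counts.get((label, word), 0)

model = ForgettingCounter()
model.learn("rec-1", "spam", ["cheap", "pills"])
model.learn("rec-2", "ham", ["meeting", "notes"])
model.unlearn("rec-1")              # a user asks for their data to be forgotten
print(model.score("spam", "pills")) # -> 0, as if "rec-1" had never existed
```

The design choice that makes forgetting cheap in this sketch is keeping the model's state as simple additive counts, so removing a record only requires subtracting its recorded contribution rather than retraining on everything else from scratch.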
Cao and Yang outlined their idea for such a system in a paper presented at the IEEE Symposium on Security and Privacy in 2015. The ability to wipe a single thread of data from a much larger set has multiple potential benefits, they say. Someone could remove their own sensitive personal data from a machine. Academics could use unlearning to clean up or otherwise correct analytics data, thereby making a predictive algorithm more accurate.
The power to manipulate data this way could be seen as its own security threat—if data were altered maliciously, for example—but Cao told me protective measures would be possible. For example: “Before removing search results related to a person in the EU, Google needs a scan of the requester’s photo ID,” he said in an email. “This is just one method of authentication, and other methods involve username/password, two-factor authentication, fingerprints, and so on.”
The idea has generated excitement among computer scientists. Cao and Yang are the recipients of a $1.2 million National Science Foundation grant to further develop the concept. If they’re successful, and if machine unlearning becomes as crucial and ubiquitous a computing feature as Cao and Yang suggest it should, what will forgetting systems mean for the way people think about the processing functionalities of the human brain? Not much, probably, until the next technology comes along and offers a more compelling analogy.
“There is much that we don’t know about brains. But we do know that they aren’t magical,” Gary Marcus, a professor of psychology and neural science at New York University, wrote in The New York Times last year. “They are just exceptionally complex arrangements of matter. Airplanes may not fly like birds, but they are subject to the same forces of lift and drag. Likewise, there is no reason to think that brains are exempt from the laws of computation.”
Human-machine metaphors have never been perfect, but they can be useful, even as computers learn and unlearn in ways that humans cannot. “We want to do more with the conceptual model provided by these giant calculating machines,” the cultural anthropologist Margaret Mead said in 1948, referring to computers, according to Ronald Kline’s book, The Cybernetics Moment. “There is no trap of saying the human body is a machine, but merely that the methods, especially the mathematics used in these machine problems, may be available tools for thinking more precisely about human behavior.”