The next challenge is to do a more effective job of understanding what we have. Since data are not ends in themselves - at least for most of the population - the ultimate goal is to apply them in ways that benefit those who wish to use them.
All of which seems rather straightforward until you actually try to do it. The challenges are statistical and perceptual, as the following article explains. Recognizing patterns is one level of understanding. Discerning where those patterns lead - if anywhere - is another. There is no single solution, but as in so much else having to do with optimizing the potential impact of technology, the most useful answers may lie in fields outside the core discipline.
Science, medicine and art may all provide insights that help improve the utility of the knowledge being generated. Work by artists like M.C. Escher hints at the potential for confusion or misinterpretation - but also at the possibility that enlightenment may come from unexpected sources. JL
Benedict Carey reports in the New York Times:
The most important question when dealing with reams of digital data is when, and in what domain, analysts will be able to build a reliable catalog of digital patterns that provide meaningful “clues” to the underlying reality.
FOR the past year or so genetic scientists at the Albert Einstein College of Medicine in New York have been collaborating with a specialist from another universe: Daniel Kohn, a Brooklyn-based painter and conceptual artist.

Mr. Kohn has no training in computers or genetics, and he’s not there to conduct art therapy classes. His role is to help the scientists with a signature 21st-century problem: Big Data overload.
Advanced computing produces waves of abstract digital data that in many cases defy interpretation; there’s no way to discern a meaningful pattern in any intuitive way. To extract some order from this chaos, analysts need to continually reimagine the ways in which they represent their data — which is where Mr. Kohn comes in. He spent 10 years working with scientists and knows how to pose useful questions. He might ask, for instance, What if the data were turned sideways? Or upside down? Or what if you could click on a point on the plotted data and see another dimension?

“A lot of the value of his input is jolting us out of our comfort zone, and making us aware that we can and should be thinking about the representation of data in new ways,” said John Greally, director of Einstein’s Center for Epigenomics, who brought on Mr. Kohn.

“The problem today is that biological data are often abstracted into the digital domain,” Dr. Greally added, “and we need some way to capture the gestalt, to develop an instinct for what’s important.”

And so it is in many fields, whether predicting climate, flagging potential terrorists or making economic forecasts. The information is all there, great expanding mountain ranges of it. What’s lacking is the tracker’s instinct for picking up a trail, the human gut feeling for where to start looking to find patterns and meaning. But can such creative instincts really be trained systematically? And even if they could, wouldn’t it take years to do so?

The answers are yes and no, at least when it comes to some advanced skills. And that should give analysts drowning in data some cause for optimism.

Scientists working in a little-known branch of psychology called perceptual learning have shown that it is possible to fast-forward a person’s gut instincts both in physical fields, like flying an airplane, and more academic ones, like deciphering advanced chemical notation. The idea is to train specific visual skills, usually with computer-game-like modules that require split-second decisions. Over time, a person develops a “good eye” for the material, and with it an ability to extract meaningful patterns instantaneously.

Perceptual learning is such an elementary skill that people forget they have it. It’s what we use as children to make distinctions between similar-looking letters, like U and V, long before we can read. It’s the skill needed to distinguish an A sharp from a B flat (both the notation and the note), or between friendly insurgents and hostiles in a fast-paced video game. By the time we move on to sentences and melodies and more cerebral gaming — “chunking” the information into larger blocks — we’ve forgotten how hard it was to learn all those subtle distinctions in the first place.

The perceptual skills themselves are still there, however, and still trainable. We use them anytime we try to learn new material: say, different software for work, or differences in native trees and plants after moving across the country. Once our eyes — or other senses — have mastered these subtle perceptual differences, we can focus on putting the knowledge to work.
THE beauty of such learning is that it is automatic; there’s no thinking involved. “We don’t just see, we look; we don’t just hear, we listen,” wrote the field’s founder, Eleanor J. Gibson, in 1969. “Perceptual learning is self-regulated, in the sense that modification occurs without the necessity of external reinforcement. It is stimulus-oriented, with the goal of extracting and reducing” the information needed.

That comment is so packed with meaning that it helps to slow down the tape. Perceptual learning is active. Our eyes (or other senses) are searching for the right clues. Automatically, no extra effort is required. We have to pay attention, of course, but there’s no need to turn the system on or tune it. It’s self-correcting — it tunes itself. The brain works to find the most meaningful sights or sounds and filter out the rest.

How does this look in the real world?
Take learning to fly, a disorienting and sometimes terrifying experience that requires hundreds of hours in the air and in the classroom — many of them devoted to learning how to read an instrument panel. In the 1980s a cognitive scientist named Philip Kellman, who had studied Dr. Gibson’s work, wondered if there was a better — and quicker — way. The dials on the instrument panel are easy enough to read on their own, one at a time; but reading all of them at once, at a glance, is another skill altogether. It’s more about reflexes, and gut feeling, than reasoning.

Dr. Kellman designed a video-game-like lesson: The student sees a panel and decides quickly what the dials are saying, collectively (there are five or six of them, depending on the plane). Below the panel on the computer screen are seven choices, including “straight climb,” “descending turn” and “level turn.” A chime sounds if the answer is correct; if wrong, a burp, and the correct answer is highlighted. Then up comes the next screen, with another instrument panel, and then another: all fast-paced, with instant feedback.

In 1994, Dr. Kellman, now a professor at the University of California, Los Angeles, tested this perceptual learning module, as he calls it, on amateur pilots. After one hour of training, novices could read the panel as accurately and quickly as pilots with an average of 1,000 flying hours, he found. They’d built the same reading skill, at least on the ground, in a fraction of the time. “You still have to fly the plane, of course,” Dr. Kellman said. “But it’s a lot less stressful when you can read that panel without stopping to think about it.”

Dr. Kellman and others have used variations on this method to quickly ramp up instincts in other complex fields, including dermatology, chemistry, cardiology and even surgery.

In a recent experiment at the University of Virginia, researchers used a perceptual-learning module to train medical students about gallbladder removal. In the past, doctors removed gallbladders by making a long cut in the abdomen and performing open surgery. But since the 1980s many doctors have been doing the surgery by making tiny incisions and threading a slender tube called a laparoscope into the abdominal cavity. The scope is equipped with a tiny camera, and the surgeon must navigate through the cavity based on the images the scope transmits. All sorts of injuries can occur if the doctor misreads those images, and it usually takes hundreds of observed surgeries to master the skill.

Half the students practiced on a computer module that showed short videos from real surgeries and had to decide quickly which stage of the surgery was pictured. The other half — the control group — studied the same videos as they pleased, rewinding if they wanted. The practice session lasted about 30 minutes. On a final exam testing their knowledge of the procedure, the perceptual-learning group trounced their equally experienced peers, scoring four times higher. Their instincts were much sharper.
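To make the structure of a module like Dr. Kellman's flight-panel lesson concrete, here is a minimal sketch of that kind of trial loop: present a panel, take a quick answer from a fixed set of choices, and give instant feedback before moving on. It is only an illustration of the fast-paced, feedback-driven format the article describes; the choice labels, file names and functions below are hypothetical stand-ins, not Dr. Kellman's actual software.

import random
import time

# Hypothetical answer set; the article names "straight climb",
# "descending turn" and "level turn" among seven choices.
CHOICES = [
    "straight and level", "level turn", "straight climb", "climbing turn",
    "straight descent", "descending turn", "stall approach",
]

def make_trials(n):
    # Each trial pairs a panel image (a made-up file name here) with its correct label.
    return [("panel_%03d.png" % i, random.choice(CHOICES)) for i in range(n)]

def run_module(trials, present_panel, get_answer):
    # present_panel(image) shows the instrument panel; get_answer(choices)
    # returns the learner's pick. Both are supplied by the caller, so the same
    # loop can drive a GUI, a terminal quiz, or an automated test.
    correct = 0
    for image, truth in trials:
        present_panel(image)
        start = time.monotonic()
        answer = get_answer(CHOICES)
        elapsed = time.monotonic() - start
        if answer == truth:
            correct += 1
            print("chime  (%.1fs)" % elapsed)           # right answer: positive signal
        else:
            print("burp   correct answer: " + truth)    # wrong answer: show the correction
    print("score: %d/%d" % (correct, len(trials)))

if __name__ == "__main__":
    # Demo run with a "learner" that guesses at random.
    run_module(
        make_trials(10),
        present_panel=lambda img: print("showing", img),
        get_answer=lambda choices: random.choice(choices),
    )

The point of the sketch is the loop itself: many short trials, a forced snap judgment on each, and immediate right/wrong feedback, which is what trains the "good eye" rather than any explicit instruction.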
No one knows yet the limits and drawbacks of leaning heavily on perceptual training. And of course the training is a complement to building expertise in a field, not a substitute for it. You can play video games all you want, but you still have to land the airplane or operate on a living human being.

But this is no gimmick. The medical school at U.C.L.A. has adopted perceptual modules as part of its standard curriculum, to train skills like reading electrocardiograms, identifying rashes (there are many varieties, which all look the same to the untrained eye) and interpreting tissue samples from biopsies. The idea is that you can learn to quickly identify abnormalities.

Such modules are equally applicable in any field of study or expertise that involves making subtle distinctions. Is that a rhombus or a trapezoid? An oak tree or a maple? The Chinese symbol for “family” or “house”? A positive-sloping line or a negative-sloping one? The modules sharpen the ability to make snap judgments so people “know” what they’re looking at without having to explain why (at least not right away).

The most important question when dealing with reams of digital data is not whether perceptual skills will be centrally important. The question is when, and in what domain, analysts will be able to build a reliable catalog of digital patterns that provide meaningful “clues” to the underlying reality, whether it’s the effect of a genetic glitch, a low-pressure zone or a drop in the yen.
When that happens — and it will, in some field — scientists will gain a foothold on the digital El Capitan and a means to build a prototype for applying perceptual-learning techniques. Given the importance of defusing terrorist plots and mining health and economic data, digital instinct-building is likely to become crucial, a discipline where people with computational and science chops will have to grow their visual sixth sense, like sea captains who can read the sky or guides who can find trails in the Mojave.

For now, it’s a lot easier to invite a visually creative expert over to the lab, to see what he or she can add.

“One thing I try to argue is that it’s not just about bigger machines to crunch more data, and it’s not even about pattern recognition,” Mr. Kohn, the painter, said in a phone interview. “It’s about frameworks of recognition; how you choose to look, rather than what you’re trying to see. Scientists often think of visual images like graphs as the end result of their analysis. I try to get them to think visually from the beginning.”