Kate Eichhorn reports in Wired:
Several decades into the age of digital media, the ability to leave one's childhood and adolescent years behind is now imperiled. Although exact numbers are hard to come by, it is evident that a majority of young people with access to mobile phones take and circulate selfies on a daily basis. There is also growing evidence that selfies are not simply a tween and teen obsession. Toddlers enjoy taking selfies, too, and whether intentionally or unintentionally, have even managed to put their images into circulation. What is the cost of this excessive documentation? More specifically, what does it mean to come of age in an era when images of childhood and adolescence, and even the social networks formed during this fleeting period of life, are so easily preserved and may stubbornly persist with or without one's intention or desire? Can one ever transcend one's youth if it remains perpetually present?

The crisis we now face concerning the persistence of childhood images was the least of anyone's concerns when digital technologies began to restructure our everyday lives in the early 1990s. Media scholars, sociologists, educational researchers, and alarmists of all political stripes were more likely to bemoan the loss of childhood than to worry about the prospect of childhood's perpetual presence. A few educators and educational researchers were earnestly exploring the potential benefits of the internet and other emerging digital technologies, but the period was marked by widespread moral panic about new media technologies. As a result, much of the earliest research on young people and the internet sought either to support or to refute fears about what was about to unfold online.

Some of the early concerns about the internet's impact on children and adolescents were legitimate. The internet did make pornography, including violent pornography, more available, and it made it easier for child predators to gain access to young people. Law enforcement agencies and legislators continue to grapple with these serious problems. However,
many early concerns about the internet were rooted in fear alone and were informed by long-standing assumptions about youth and their ability to make rational decisions.

Many adults feared that if left to surf the web alone, children would suffer a quick and irreparable loss of innocence. These concerns were fueled by reports about what allegedly lurked online. At a time when many adults were just beginning to venture online, the internet was still commonly depicted in the popular media as a place where anyone could easily wander into a sexually charged multiuser domain (MUD), hang out with computer hackers and learn the tricks of their criminal trade, or hone their skills as a terrorist or bomb builder. In fact, doing any of these things usually required more than a single foray onto the web. But that did little to curtail perceptions of the internet as a dark and dangerous place where threats of all kinds were waiting at the welcome gate.

While the media obsessed over how to protect children from online pornography, perverts, hackers, and vigilantes, researchers in the applied and social sciences were busy producing reams of evidence-based studies on the supposed link between internet use and various physical and social disorders. Some researchers cautioned that spending too much time online would lead to greater levels of obesity, repetitive strain, tendonitis, and back injuries in young people. Others cautioned that the internet caused mental problems, ranging from social isolation and depression to a decreased ability to distinguish between real life and simulated situations.

A common theme underpinning both popular and scholarly articles about the internet in the 1990s was that this new technology had created a shift in power and access to knowledge. A widely reprinted 1993 article ominously titled "Caution: Children at Play on the Information Highway" warned: "Dropping children in front of the computer is a little like letting them cruise the mall for the afternoon. But when parents drop their sons or daughters off at a real mall, they generally set ground rules: Don't talk to strangers, don't go into Victoria's Secret, and here's the amount of money you'll be able to spend. At the electronic mall, few parents are setting the rules or even have a clue about how to set them."

If parents were simultaneously concerned and clueless, it had much to do with the fact that, as the decade wore on, young people grew to outnumber adults in many regions of what was then still commonly described as cyberspace. Practical parental questions became increasingly challenging to answer and, in some cases, even to ask: Who had the power to impose a curfew in this online realm? Where were the boundaries of this new and rapidly expanding space? And what sorts of relationships were children establishing there? Were young people who met online simply pen pals who exchanged letters in real time, or were they actual acquaintances? Could one's child have sexual encounters online, or just exchange messages about sex? There was nothing new about parents worrying about where their children were and what they were doing, but these worries were exacerbated by new conceptual challenges. Parents were now having to make informed decisions about their children's wellbeing in a realm that few of them understood or had even experienced firsthand.

In such a context, it is easy to understand why the imperiled innocence of children was invoked as a rationale for increased regulation and monitoring of the internet.
In the United States, the Communications Decency Act, signed into law by President Clinton in 1996, gained considerable support due to widespread fears that without increased regulation of communications, the nation's children were doomed to become perverts and digital vigilantes. The act, which the American Civil Liberties Union would later successfully challenge in the Supreme Court as a violation of the First Amendment, authorized the US government to "encourage the development of technologies which maximize user control over what information is received by individuals, families, and schools who use the Internet and other interactive computer services" and "to remove disincentives for the development and utilization of blocking and filtering technologies that empower parents to restrict their children's access to objectionable or inappropriate online material." Those who drafted the act took at face value the claim that children's perception of reality is invariably influenced by their interactions with media technologies (a claim based on earlier studies of young people's interactions with film and television) and concluded, as a result, that filters were necessary.

At least a few critics, however, recognized that discourses centered on children's innocence were being used to promote online censorship without taking children's actual needs into account. In a 1997 article published in Radical Teacher, the media theorist Henry Jenkins astutely observed that parents', educators', and politicians' moral panic over the internet was nothing new. From attacks on comic books in the early 20th century to later panics about the negative effects of cinema, radio, and television, the argument that new media pose a threat to young people was already well rehearsed. Jenkins argued that the real problem was not the new media, but rather the myth of childhood innocence itself:

The myth of "childhood innocence" "empties" children of any thoughts of their own, stripping them of their own political agency and social agendas so that they may become vehicles for adult needs, desires, and politics … The "innocent" child is an increasingly dangerous abstraction when it starts to substitute in our thinking for actual children or when it helps justify efforts to restrict real children's minds and to regulate their bodies. The myth of "childhood innocence," which sees children only as potential victims of the adult world or as beneficiaries of paternalistic protection, opposes pedagogies that empower children as active agents in the educational process. We cannot teach children how to engage in critical thought by denying them access to challenging information or provocative images.

Jenkins was not the only one to insist that the real challenge was to empower children and adolescents to use the internet in productive and innovative ways so as to build a new and vibrant public sphere. We now know that a critical mass of educators and parents did choose to allow children ample access to the internet in the 1990s and early 2000s. Those young people ended up building many of the social media and sharing-economy platforms that would transform the lives of people of all ages by the end of the first decade of the new millennium. (In 1996, Facebook's Mark Zuckerberg was 12 years old, and Airbnb's Brian Chesky was 15.) But at the time, Jenkins had a hard sell—his argument was circulating in a culture where many people had already given up on the future of childhood.
Among the more well-known skeptics was another media theorist, Neil Postman. Postman argued in his 1982 book The Disappearance of Childhood that new media were eroding the distinction between childhood and adulthood. "With the electric media's rapid and egalitarian disclosure of the total content of the adult world, several profound consequences result," he claimed. These consequences included a diminishment of the authority of adults and of the curiosity of children. Although not necessarily invested in the idea of childhood innocence, Postman was invested in the idea and ideal of childhood, which he believed was already in decline. This, he contended, had much to do with the fact that childhood—a relatively recent historical invention—is a construct that has always been deeply entangled with the history of media technologies.

While there have, of course, always been young people, a number of scholars have posited that the concept of childhood is an early modern invention. Postman not only adopted this position but also argued that the concept was one of the far-reaching consequences of movable type, which first appeared in Mainz, Germany, in the mid-15th century. With the spread of print culture, orality was demoted, creating a hierarchy between those who could read and those who could not. The very young were increasingly placed outside the adult world of literacy. During this period, something else occurred: different types of printed works began to be produced for different types of readers. In the 16th century, there were no age-based grades or corresponding books; new readers, whether they were 5 or 35, were expected to read the same basic books. By the late 18th century, however, the world had changed. Children had access to children's books, and adults had access to adult books. Children were now regarded as a separate category that required protection from the evils of the adult world.

But the reign of childhood (according to Postman, a period running roughly from the mid-19th to the mid-20th century) would prove short-lived. Although earlier communication technologies and broadcast media, from the telegraph to cinema, were already chipping away at childhood, the arrival of television in the mid-20th century marked the beginning of the end. Postman concludes, "Television erodes the dividing line between childhood and adulthood in three ways, all having to do with its undifferentiated accessibility: first, because it requires no instruction to grasp its form; second, because it does not make complex demands on either mind or behavior; and third, because it does not segregate its audience."

Although Postman's book focuses on television, it contains a curious but rarely discussed side note on the potential impact of computing.
In the final chapter, Postman poses and responds to six questions, including the following: "Are there any communication technologies that have the potential to sustain the need for childhood?" In response to his own question, he replies, "The only technology that has this capacity is the computer." To program a computer, he explains, one must in essence learn a language, a skill that would have to be acquired in childhood: "Should it be deemed necessary that everyone must know how computers work, how they impose their special worldview, how they alter our definition of judgment—that is, should it be deemed necessary that there be universal computer literacy—it is conceivable that the schooling of the young will increase in importance and a youth culture different from adult culture might be sustained." But things could turn out differently. If economic and political interests decide that they would be better served by "allowing the bulk of a semiliterate population to entertain itself with the magic of visual computer games, to use and be used by computers without understanding … childhood could, without obstruction, continue on its journey to oblivion."

At the time, Postman's argument no doubt made a lot of sense. When he was writing his book—likely in longhand or on a typewriter—the idea that a future generation of children, even toddlers, would easily be able to use computers had not yet occurred to most people. In 1982, when The Disappearance of Childhood hit the shelves, the graphical user interface that would transform computing had yet to be launched on a mass scale. Unless Postman happened to have had access to a rare Xerox Star, which retailed for about $16,000 per unit in 1981, he presumably was not thinking about computers in their current form at all. He likely imagined that using computers for more than play would remain the purview of those with considerable expertise (an expertise akin to mastering a new language). Of course, this is not how the digital revolution played out.

As the Xerox Star's interface evolved into today's familiar computing environment and later into the touch screens of mobile phones and tablets, using computers for a wide range of purposes beyond gaming no longer required the ability to program them. Thanks to the graphical user interface pioneered at Xerox and eventually popularized by Apple, by the 2000s one could do many things with computers without knowledge of or interest in their inner workings. The other thing that Postman did not anticipate is that young people would prove more adept at building and programming computers than most older adults. Fluency in this new language, unlike most other languages, did not deepen or expand with age. By the late 1990s, there was little doubt that adults were not in control of the digital revolution. The most ubiquitous digital tools and platforms of our era, from Google to Facebook to Airbnb, would all be invented by people just out of their teens. What was the result? In the end, childhood as it once existed (i.e., in the pre-television era) was not restored, but Postman's fear that childhood would disappear also proved wrong. Instead, something quite unexpected happened.

In the early 1980s, Postman and many others saw the line between children's culture and adults' culture rapidly dissolving, primarily because of the undifferentiating impact of television. The solution was to restore the balance—to reestablish the boundaries between these once separate cultures.
Postman argued that if we could return to a pre-television era where children occupied one world and adults another, childhood might have some hope of surviving into the 21st century and well beyond. Today, the distinction between childhood and adulthood has reemerged, but not in the way that Postman imagined.

In our current digital age, child and adolescent culture is alive and well. Most young people spend hours online every day exploring worlds in which most adults take little interest and to which they have only limited access. But this is where the real difference lies. In the world of print, adults determined what children could and could not access—after all, adults operated the printing presses, purchased the books, and controlled the libraries. Now, children are free to build their own worlds and, more importantly, to populate these worlds with their own content. The content, perhaps not surprisingly, is predominantly centered on the self (the selfie being emblematic of this tendency). So, in a sense, childhood has survived, but its nature—what it is and how it is experienced and represented—is increasingly in the hands of young people themselves. If childhood was once constructed and recorded by adults and mirrored back to children (e.g., in a carefully curated family photo album or a series of home video clips), this is no longer the case. Today, young people create images and put them into circulation without the interference of adults.

In sharp contrast to Postman's prediction, childhood never did disappear. Instead, it has become ubiquitous in a new and unexpected way. Today, childhood and adolescence are more visible and pervasive than ever before. For the first time in history, children and adolescents have widespread access to the technologies needed to represent their lives, circulate these representations, and forge networks with each other, often with little or no adult supervision. The potential danger is no longer childhood's disappearance, but rather the possibility of a perpetual childhood. The real crisis of the digital age is not the disappearance of childhood, but the specter of a childhood that can never be forgotten.