A pitched battle is raging. It is between the optimists who think (hope?) that all the technologically driven displacement civilization is currently enduring will ultimately be a good thing, and the pessimists (realists?) who believe that the job losses suffered to date, driven by the tech/financial alliance credited with lowering living standards in the developed world, are indicative of what is to come.
It is not yet clear which is right. History teaches us that transitions to new technologies and types of fuel can take 40 to 60 years. So society has another generation - or three - to go before the process that began with the dotcom era achieves some sort of conclusion. What we do know is that whatever work looks like in the future will probably not much resemble that which is being done now. Forewarned is forearmed. JL
Derek Thompson reports in The Atlantic:
For centuries, experts have predicted that machines would make workers obsolete. That moment may finally be arriving. Could that be a good thing?
1. Youngstown, U.S.A.
The end of work is still just a futuristic concept for most of the United States, but it is something like a moment in history for Youngstown, Ohio, one its residents can cite with precision: September 19, 1977.
For much of the 20th century, Youngstown’s steel mills delivered such great prosperity that the city was a model of the American dream, boasting a median income and a homeownership rate that were among the nation’s highest. But as manufacturing shifted abroad after World War II, Youngstown steel suffered, and on that gray September afternoon in 1977, Youngstown Sheet and Tube announced the shuttering of its Campbell Works mill. Within five years, the city lost 50,000 jobs and $1.3 billion in manufacturing wages. The effect was so severe that a term was coined to describe the fallout: regional depression.

Youngstown was transformed not only by an economic disruption but also by a psychological and cultural breakdown. Depression, spousal abuse, and suicide all became much more prevalent; the caseload of the area’s mental-health center tripled within a decade. The city built four prisons in the mid-1990s—a rare growth industry. One of the few downtown construction projects of that period was a museum dedicated to the defunct steel industry.
This winter, I traveled to Ohio to consider what would happen if technology permanently replaced a great deal of human work. I wasn’t seeking a tour of our automated future. I went because Youngstown has become a national metaphor for the decline of labor, a place where the middle class of the 20th century has become a museum exhibit.
“Youngstown’s story is America’s story, because it shows that when jobs go away, the cultural cohesion of a place is destroyed,” says John Russo, a professor of labor studies at Youngstown State University. “The cultural breakdown matters even more than the economic breakdown.”

In the past few years, even as the United States has pulled itself partway out of the jobs hole created by the Great Recession, some economists and technologists have warned that the economy is near a tipping point. When they peer deeply into labor-market data, they see troubling signs, masked for now by a cyclical recovery. And when they look up from their spreadsheets, they see automation high and low—robots in the operating room and behind the fast-food counter. They imagine self-driving cars snaking through the streets and Amazon drones dotting the sky, replacing millions of drivers, warehouse stockers, and retail workers. They observe that the capabilities of machines—already formidable—continue to expand exponentially, while our own remain the same. And they wonder: Is any job truly safe?
Futurists and science-fiction writers have at times looked forward to machines’ workplace takeover with a kind of giddy excitement, imagining the banishment of drudgery and its replacement by expansive leisure and almost limitless personal freedom. And make no mistake: if the capabilities of computers continue to multiply while the price of computing continues to decline, that will mean a great many of life’s necessities and luxuries will become ever cheaper, and it will mean great wealth—at least when aggregated up to the level of the national economy.
But even leaving aside questions of how to distribute that wealth, the widespread disappearance of work would usher in a social transformation unlike any we’ve seen. If John Russo is right, then saving work is more important than saving any particular job. Industriousness has served as America’s unofficial religion since its founding. The sanctity and preeminence of work lie at the heart of the country’s politics, economics, and social interactions. What might happen if work goes away?
The U.S. labor force has been shaped by millennia of technological progress. Agricultural technology birthed the farming industry, the industrial revolution moved people into factories, and then globalization and automation moved them back out, giving rise to a nation of services. But throughout these reshufflings, the total number of jobs has always increased. What may be looming is something different: an era of technological unemployment, in which computer scientists and software engineers essentially invent us out of work, and the total number of jobs declines steadily and permanently.
This fear is not new. The hope that machines might free us from toil has always been intertwined with the fear that they will rob us of our agency. In the midst of the Great Depression, the economist John Maynard Keynes forecast that technological progress might allow a 15-hour workweek, and abundant leisure, by 2030. But around the same time, President Herbert Hoover received a letter warning that industrial technology was a “Frankenstein monster” that threatened to upend manufacturing, “devouring our civilization.” (The letter came from the mayor of Palo Alto, of all places.) In 1962, President John F. Kennedy said, “If men have the talent to invent new machines that put men out of work, they have the talent to put those men back to work.” But two years later, a committee of scientists and social activists sent an open letter to President Lyndon B. Johnson arguing that “the cybernation revolution” would create “a separate nation of the poor, the unskilled, the jobless,” who would be unable either to find work or to afford life’s necessities.
The job market defied doomsayers in those earlier times, and according to the most frequently reported jobs numbers, it has so far done the same in our own time. Unemployment is currently just over 5 percent, and 2014 was this century’s best year for job growth. One could be forgiven for saying that recent predictions about technological job displacement are merely the latest chapter in a long story called The Boys Who Cried Robot—one in which the robot, unlike the wolf, never arrives in the end.
The end-of-work argument has often been dismissed as the “Luddite fallacy,” an allusion to the 19th-century British brutes who smashed textile-making machines at the dawn of the industrial revolution, fearing the machines would put hand-weavers out of work. But some of the most sober economists are beginning to worry that the Luddites weren’t wrong, just premature. When former Treasury Secretary Lawrence Summers was an MIT undergraduate in the early 1970s, many economists disdained “the stupid people [who] thought that automation was going to make all the jobs go away,” he said at the National Bureau of Economic Research Summer Institute in July 2013. “Until a few years ago, I didn’t think this was a very complicated subject: the Luddites were wrong, and the believers in technology and technological progress were right. I’m not so completely certain now.”

2. Reasons to Cry Robot
What does the “end of work” mean, exactly? It does not mean the imminence of total unemployment, nor is the United States remotely likely to face, say, 30 or 50 percent unemployment within the next decade. Rather, technology could exert a slow but continual downward pressure on the value and availability of work—that is, on wages and on the share of prime-age workers with full-time jobs. Eventually, by degrees, that could create a new normal, where the expectation that work will be a central feature of adult life dissipates for a significant portion of society.
After 300 years of people crying wolf, there are now three broad reasons to take seriously the argument that the beast is at the door: the ongoing triumph of capital over labor, the quiet demise of the working man, and the impressive dexterity of information technology.
• Labor’s losses. One of the first things we might expect to see in a period of technological displacement is the diminishment of human labor as a driver of economic growth. In fact, signs that this is happening have been present for quite some time. The share of U.S. economic output that’s paid out in wages fell steadily in the 1980s, reversed some of its losses in the ’90s, and then continued falling after 2000, accelerating during the Great Recession. It now stands at its lowest level since the government started keeping track in the mid‑20th century.
A number of theories have been advanced to explain this phenomenon, including globalization and its accompanying loss of bargaining power for some workers. But Loukas Karabarbounis and Brent Neiman, economists at the University of Chicago, have estimated that almost half of the decline is the result of businesses’ replacing workers with computers and software. In 1964, the nation’s most valuable company, AT&T, was worth $267 billion in today’s dollars and employed 758,611 people. Today’s telecommunications giant, Google, is worth $370 billion but has only about 55,000 employees—less than a tenth the size of AT&T’s workforce in its heyday.
• The spread of nonworking men and underemployed youth. The share of prime-age Americans (25 to 54 years old) who are working has been trending down since 2000. Among men, the decline began even earlier: the share of prime-age men who are neither working nor looking for work has doubled since the late 1970s, and has increased as much throughout the recovery as it did during the Great Recession itself. All in all, about one in six prime-age men today are either unemployed or out of the workforce altogether. This is what the economist Tyler Cowen calls “the key statistic” for understanding the spreading rot in the American workforce. Conventional wisdom has long held that under normal economic conditions, men in this age group—at the peak of their abilities and less likely than women to be primary caregivers for children—should almost all be working. Yet fewer and fewer are.
Economists cannot say for certain why men are turning away from work, but one explanation is that technological change has helped eliminate the jobs for which many are best suited. Since 2000, the number of manufacturing jobs has fallen by almost 5 million, or about 30 percent. Young people just coming onto the job market are also struggling—and by many measures have been for years. Six years into the recovery, the share of recent college grads who are “underemployed” (in jobs that historically haven’t required a degree) is still higher than it was in 2007—or, for that matter, 2000. And the supply of these “non-college jobs” is shifting away from high-paying occupations, such as electrician, toward low-wage service jobs, such as waiter. More people are pursuing higher education, but the real wages of recent college graduates have fallen by 7.7 percent since 2000. In the biggest picture, the job market appears to be requiring more and more preparation for a lower and lower starting wage. The distorting effect of the Great Recession should make us cautious about overinterpreting these trends, but most began before the recession, and they do not seem to speak encouragingly about the future of work.
• The shrewdness of software. One common objection to the idea that technology will permanently displace huge numbers of workers is that new gadgets, like self-checkout kiosks at drugstores, have failed to fully displace their human counterparts, like cashiers. But employers typically take years to embrace new machines at the expense of workers. The robotics revolution began in factories in the 1960s and ’70s, but manufacturing employment kept rising until 1980, and then collapsed during the subsequent recessions. Likewise, “the personal computer existed in the ’80s,” says Henry Siu, an economist at the University of British Columbia, “but you don’t see any effect on office and administrative-support jobs until the 1990s, and then suddenly, in the last recession, it’s huge. So today you’ve got checkout screens and the promise of driverless cars, flying drones, and little warehouse robots. We know that these tasks can be done by machines rather than people. But we may not see the effect until the next recession, or the recession after that.”
Some observers say our humanity is a moat that machines cannot cross. They believe people’s capacity for compassion, deep understanding, and creativity are inimitable. But as Erik Brynjolfsson and Andrew McAfee have argued in their book The Second Machine Age, computers are so dexterous that predicting their application 10 years from now is almost impossible. Who could have guessed in 2005, two years before the iPhone was released, that smartphones would threaten hotel jobs within the decade, by helping homeowners rent out their apartments and houses to strangers on Airbnb? Or that the company behind the most popular search engine would design a self-driving car that could soon threaten driving, the most common occupation among American men? In 2013, Oxford University researchers forecast that machines might be able to perform half of all U.S. jobs in the next two decades. The projection was audacious, but in at least a few cases, it probably didn’t go far enough. For example, the authors named psychologist as one of the occupations least likely to be “computerisable.” But some research suggests that people are more honest in therapy sessions when they believe they are confessing their troubles to a computer, because a machine can’t pass moral judgment. Google and WebMD already may be answering questions once reserved for one’s therapist. This doesn’t prove that psychologists are going the way of the textile worker. Rather, it shows how easily computers can encroach on areas previously considered “for humans only.”
After 300 years of breathtaking innovation, people aren’t massively unemployed or indentured by machines. But to suggest how this could change, some economists have pointed to the defunct career of the second-most-important species in U.S. economic history: the horse.
For many centuries, people created technologies that made the horse more productive and more valuable—like plows for agriculture and swords for battle. One might have assumed that the continuing advance of complementary technologies would make the animal ever more essential to farming and fighting, historically perhaps the two most consequential human activities. Instead came inventions that made the horse obsolete—the tractor, the car, and the tank. After tractors rolled onto American farms in the early 20th century, the population of horses and mules began to decline steeply, falling nearly 50 percent by the 1930s and 90 percent by the 1950s.
Humans can do much more than trot, carry, and pull. But the skills required in most offices hardly elicit our full range of intelligence. Most jobs are still boring, repetitive, and easily learned. The most-common occupations in the United States are retail salesperson, cashier, food and beverage server, and office clerk. Together, these four jobs employ 15.4 million people—nearly 10 percent of the labor force, or more workers than there are in Texas and Massachusetts combined. Each is highly susceptible to automation, according to the Oxford study. Technology creates some jobs too, but the creative half of creative destruction is easily overstated. Nine out of 10 workers today are in occupations that existed 100 years ago, and just 5 percent of the jobs generated between 1993 and 2013 came from “high tech” sectors like computing, software, and telecommunications. Our newest industries tend to be the most labor-efficient: they just don’t require many people. It is for precisely this reason that the economic historian Robert Skidelsky, comparing the exponential growth in computing power with the less-than-exponential growth in job complexity, has said, “Sooner or later, we will run out of jobs.”
Is that certain—or certainly imminent? No. The signs so far are murky and suggestive. The most fundamental and wrenching job restructurings and contractions tend to happen during recessions: we’ll know more after the next couple of downturns. But the possibility seems significant enough—and the consequences disruptive enough—that we owe it to ourselves to start thinking about what society could look like without universal work, in an effort to begin nudging it toward the better outcomes and away from the worse ones.
To paraphrase the science-fiction novelist William Gibson, there are, perhaps, fragments of the post-work future distributed throughout the present. I see three overlapping possibilities as formal employment opportunities decline. Some people displaced from the formal workforce will devote their freedom to simple leisure; some will seek to build productive communities outside the workplace; and others will fight, passionately and in many cases fruitlessly, to reclaim their productivity by piecing together jobs in an informal economy. These are futures of consumption, communal creativity, and contingency. In any combination, it is almost certain that the country would have to embrace a radical new role for government.
3. Consumption: The Paradox of Leisure
Work is really three things, says Peter Frase, the author of Four Futures, a forthcoming book about how automation will change America: the means by which the economy produces goods, the means by which people earn income, and an activity that lends meaning or purpose to many people’s lives. “We tend to conflate these things,” he told me, “because today we need to pay people to keep the lights on, so to speak. But in a future of abundance, you wouldn’t, and we ought to think about ways to make it easier and better to not be employed.”
Frase belongs to a small group of writers, academics, and economists—they have been called “post-workists”—who welcome, even root for, the end of labor. American society has “an irrational belief in work for work’s sake,” says Benjamin Hunnicutt, another post-workist and a historian at the University of Iowa, even though most jobs aren’t so uplifting. A 2014 Gallup report of worker satisfaction found that as many as 70 percent of Americans don’t feel engaged by their current job. Hunnicutt told me that if a cashier’s work were a video game—grab an item, find the bar code, scan it, slide the item onward, and repeat—critics of video games might call it mindless. But when it’s a job, politicians praise its intrinsic dignity. “Purpose, meaning, identity, fulfillment, creativity, autonomy—all these things that positive psychology has shown us to be necessary for well-being are absent in the average job,” he said.
The post-workists are certainly right about some important things. Paid labor does not always map to social good. Raising children and caring for the sick is essential work, and these jobs are compensated poorly or not at all. In a post-work society, Hunnicutt said, people might spend more time caring for their families and neighbors; pride could come from our relationships rather than from our careers.
The post-work proponents acknowledge that, even in the best post-work scenarios, pride and jealousy will persevere, because reputation will always be scarce, even in an economy of abundance. But with the right government provisions, they believe, the end of wage labor will allow for a golden age of well-being. Hunnicutt said he thinks colleges could reemerge as cultural centers rather than job-prep institutions. The word school , he pointed out, comes from skholē, the Greek word for “leisure.” “We used to teach people to be free,” he said. “Now we teach them to work.”
Hunnicutt’s vision rests on certain assumptions about taxation and redistribution that might not be congenial to many Americans today. But even leaving that aside for the moment, this vision is problematic: it doesn’t resemble the world as it is currently experienced by most jobless people. By and large, the jobless don’t spend their downtime socializing with friends or taking up new hobbies. Instead, they watch TV or sleep. Time-use surveys show that jobless prime-age people dedicate some of the time once spent working to cleaning and childcare. But men in particular devote most of their free time to leisure, the lion’s share of which is spent watching television, browsing the Internet, and sleeping. Retired seniors watch about 50 hours of television a week, according to Nielsen. That means they spend a majority of their lives either sleeping or sitting on the sofa looking at a flatscreen. The unemployed theoretically have the most time to socialize, and yet studies have shown that they feel the most social isolation; it is surprisingly hard to replace the camaraderie of the water cooler.

Most people want to work, and are miserable when they cannot. The ills of unemployment go well beyond the loss of income; people who lose their job are more likely to suffer from mental and physical ailments. “There is a loss of status, a general malaise and demoralization, which appears somatically or psychologically or both,” says Ralph Catalano, a public-health professor at UC Berkeley. Research has shown that it is harder to recover from a long bout of joblessness than from losing a loved one or suffering a life-altering injury. The very things that help many people recover from other emotional traumas—a routine, an absorbing distraction, a daily purpose—are not readily available to the unemployed.
The transition from labor force to leisure force would likely be particularly hard on Americans, the worker bees of the rich world: Between 1950 and 2012, annual hours worked per worker fell significantly throughout Europe—by about 40 percent in Germany and the Netherlands—but by only 10 percent in the United States. Richer, college-educated Americans are working more than they did 30 years ago, particularly when you count time working and answering e-mail at home.
In 1989, the psychologists Mihaly Csikszentmihalyi and Judith LeFevre conducted a famous study of Chicago workers that found people at work often wished they were somewhere else. But in questionnaires, these same workers reported feeling better and less anxious in the office or at the plant than they did elsewhere. The two psychologists called this “the paradox of work”: many people are happier complaining about jobs than they are luxuriating in too much leisure. Other researchers have used the term guilty couch potato to describe people who use media to relax but often feel worthless when they reflect on their unproductive downtime. Contentment speaks in the present tense, but something more—pride—comes only in reflection on past accomplishments.
The post-workists argue that Americans work so hard because their culture has conditioned them to feel guilty when they are not being productive, and that this guilt will fade as work ceases to be the norm. This might prove true, but it’s an untestable hypothesis. When I asked Hunnicutt what sort of modern community most resembles his ideal of a post-work society, he admitted, “I’m not sure that such a place exists.” Less passive and more nourishing forms of mass leisure could develop. Arguably, they already are developing. The Internet, social media, and gaming offer entertainments that are as easy to slip into as is watching TV, but all are more purposeful and often less isolating. Video games, despite the derision aimed at them, are vehicles for achievement of a sort. Jeremy Bailenson, a communications professor at Stanford, says that as virtual-reality technology improves, people’s “cyber-existence” will become as rich and social as their “real” life. Games in which users climb “into another person’s skin to embody his or her experiences firsthand” don’t just let people live out vicarious fantasies, he has argued, but also “help you live as somebody else to teach you empathy and pro-social skills.”
But it’s hard to imagine that leisure could ever entirely fill the vacuum of accomplishment left by the demise of labor. Most people do need to achieve things through, yes, work to feel a lasting sense of purpose. To envision a future that offers more than minute-to-minute satisfaction, we have to imagine how millions of people might find meaningful work without formal wages. So, inspired by the predictions of one of America’s most famous labor economists, I took a detour on my way to Youngstown and stopped in Columbus, Ohio.
4. Communal Creativity: The Artisans’ Revenge
Artisans made up the original American middle class. Before industrialization swept through the U.S. economy, many people who didn’t work on farms were silversmiths, blacksmiths, or woodworkers. These artisans were ground up by the machinery of mass production in the 20th century. But Lawrence Katz, a labor economist at Harvard, sees the next wave of automation returning us to an age of craftsmanship and artistry. In particular, he looks forward to the ramifications of 3‑D printing, whereby machines construct complex objects from digital designs.
The factories that arose more than a century ago “could make Model Ts and forks and knives and mugs and glasses in a standardized, cheap way, and that drove the artisans out of business,” Katz told me. “But what if the new tech, like 3-D-printing machines, can do customized things that are almost as cheap? It’s possible that information technology and robots eliminate traditional jobs and make possible a new artisanal economy … an economy geared around self-expression, where people would do artistic things with their time.”
In other words, it would be a future not of consumption but of creativity, as technology returns the tools of the assembly line to individuals, democratizing the means of mass production.
Something like this future is already present in the small but growing number of industrial shops called “makerspaces” that have popped up in the United States and around the world. The Columbus Idea Foundry is the country’s largest such space, a cavernous converted shoe factory stocked with industrial-age machinery. Several hundred members pay a monthly fee to use its arsenal of machines to make gifts and jewelry; weld, finish, and paint; play with plasma cutters and work an angle grinder; or operate a lathe with a machinist. When I arrived there on a bitterly cold afternoon in February, a chalkboard standing on an easel by the door displayed three arrows, pointing toward bathrooms, pewter casting, and zombies. Near the entrance, three men with black fingertips and grease-stained shirts took turns fixing a 60-year-old metal-turning lathe. Behind them, a resident artist was tutoring an older woman on how to transfer her photographs onto a large canvas, while a couple of guys fed pizza pies into a propane-fired stone oven. Elsewhere, men in protective goggles welded a sign for a local chicken restaurant, while others punched codes into a computer-controlled laser-cutting machine. Beneath the din of drilling and wood-cutting, a Pandora rock station hummed tinnily from a Wi‑Fi-connected Edison phonograph horn. The foundry is not just a gymnasium of tools. It is a social center.
Alex Bandar, who started the foundry after receiving a doctorate in materials science and engineering, has a theory about the rhythms of invention in American history. Over the past century, he told me, the economy has moved from hardware to software, from atoms to bits, and people have spent more time at work in front of screens. But as computers take over more tasks previously considered the province of humans, the pendulum will swing back from bits to atoms, at least when it comes to how people spend their days. Bandar thinks that a digitally preoccupied society will come to appreciate the pure and distinct pleasure of making things you can touch. “I’ve always wanted to usher in a new era of technology where robots do our bidding,” Bandar said. “If you have better batteries, better robotics, more dexterous manipulation, then it’s not a far stretch to say robots do most of the work. So what do we do? Play? Draw? Actually talk to each other again?”
You don’t need any particular fondness for plasma cutters to see the beauty of an economy where tens of millions of people make things they enjoy making—whether physical or digital, in buildings or in online communities—and receive feedback and appreciation for their work. The Internet and the cheap availability of artistic tools have already empowered millions of people to produce culture from their living rooms. People upload more than 400,000 hours of YouTube videos and 350 million new Facebook photos every day. The demise of the formal economy could free many would-be artists, writers, and craftspeople to dedicate their time to creative interests—to live as cultural producers. Such activities offer virtues that many organizational psychologists consider central to satisfaction at work: independence, the chance to develop mastery, and a sense of purpose.

After touring the foundry, I sat at a long table with several members, sharing the pizza that had come out of the communal oven. I asked them what they thought of their organization as a model for a future where automation reached further into the formal economy. A mixed-media artist named Kate Morgan said that most people she knew at the foundry would quit their jobs and use the foundry to start their own business if they could. Others spoke about the fundamental need to witness the outcome of one’s work, which was satisfied more deeply by craftsmanship than by other jobs they’d held.
Late in the conversation, we were joined by Terry Griner, an engineer who had built miniature steam engines in his garage before Bandar invited him to join the foundry. His fingers were covered in soot, and he told me about the pride he had in his ability to fix things. “I’ve been working since I was 16. I’ve done food service, restaurant work, hospital work, and computer programming. I’ve done a lot of different jobs,” said Griner, who is now a divorced father. “But if we had a society that said, ‘We’ll cover your essentials, you can work in the shop,’ I think that would be utopia. That, to me, would be the best of all possible worlds.”
5. Contingency: “You’re on Your Own”
One mile to the east of downtown Youngstown, in a brick building surrounded by several empty lots, is Royal Oaks, an iconic blue-collar dive. At about 5:30 p.m. on a Wednesday, the place was nearly full. The bar glowed yellow and green from the lights mounted along a wall. Old beer signs, trophies, masks, and mannequins cluttered the back corner of the main room, like party leftovers stuffed in an attic. The scene was mostly middle-aged men, some in groups, talking loudly about baseball and smelling vaguely of pot; some drank alone at the bar, sitting quietly or listening to music on headphones. I spoke with several patrons there who work as musicians, artists, or handymen; many did not hold a steady job.
“It is the end of a particular kind of wage work,” said Hannah Woodroofe, a bartender there who, it turns out, is also a graduate student at the University of Chicago. (She’s writing a dissertation on Youngstown as a harbinger of the future of work.) A lot of people in the city make ends meet via “post-wage arrangements,” she said, working for tenancy or under the table, or trading services. Places like Royal Oaks are the new union halls: People go there not only to relax but also to find tradespeople for particular jobs, like auto repair. Others go to exchange fresh vegetables, grown in urban gardens they’ve created amid Youngstown’s vacant lots.

When an entire area, like Youngstown, suffers from high and prolonged unemployment, problems caused by unemployment move beyond the personal sphere; widespread joblessness shatters neighborhoods and leaches away their civic spirit. John Russo, the Youngstown State professor, who is a co-author of a history of the city, Steeltown USA, says the local identity took a savage blow when residents lost the ability to find reliable employment. “I can’t stress this enough: this isn’t just about economics; it’s psychological,” he told me.
Russo sees Youngstown as the leading edge of a larger trend toward the development of what he calls the “precariat”—a working class that swings from task to task in order to make ends meet and suffers a loss of labor rights, bargaining rights, and job security. In Youngstown, many of these workers have by now made their peace with insecurity and poverty by building an identity, and some measure of pride, around contingency. The faith they lost in institutions—the corporations that have abandoned the city, the police who have failed to keep them safe—has not returned. But Russo and Woodroofe both told me they put stock in their own independence. And so a place that once defined itself single-mindedly by the steel its residents made has gradually learned to embrace the valorization of well-rounded resourcefulness.
Karen Schubert, a 54-year-old writer with two master’s degrees, accepted a part-time job as a hostess at a café in Youngstown early this year, after spending months searching for full-time work. Schubert, who has two grown children and an infant grandson, said she’d loved teaching writing and literature at the local university. But many colleges have replaced full-time professors with part-time adjuncts in order to control costs, and she’d found that with the hours she could get, adjunct teaching didn’t pay a living wage, so she’d stopped. “I think I would feel like a personal failure if I didn’t know that so many Americans have their leg caught in the same trap,” she said.
Among Youngstown’s precariat, one can see a third possible future, where millions of people struggle for years to build a sense of purpose in the absence of formal jobs, and where entrepreneurship emerges out of necessity. But while it lacks the comforts of the consumption economy or the cultural richness of Lawrence Katz’s artisanal future, it is more complex than an outright dystopia. “There are young people working part-time in the new economy who feel independent, whose work and personal relationships are contingent, and say they like it like this—to have short hours so they have time to focus on their passions,” Russo said.
Schubert’s wages at the café are not enough to live on, and in her spare time, she sells books of her poetry at readings and organizes gatherings of the literary-arts community in Youngstown, where other writers (many of them also underemployed) share their prose. The evaporation of work has deepened the local arts and music scene, several residents told me, because people who are inclined toward the arts have so much time to spend with one another. “We’re a devastatingly poor and hemorrhaging population, but the people who live here are fearless and creative and phenomenal,” Schubert said.
Whether or not one has artistic ambitions as Schubert does, it is arguably growing easier to find short-term gigs or spot employment. Paradoxically, technology is the reason. A constellation of Internet-enabled companies matches available workers with quick jobs, most prominently including Uber (for drivers), Seamless (for meal deliverers), Homejoy (for house cleaners), and TaskRabbit (for just about anyone else). And online markets like Craigslist and eBay have likewise made it easier for people to take on small independent projects, such as furniture refurbishing. Although the on-demand economy is not yet a major part of the employment picture, the number of “temporary-help services” workers has grown by 50 percent since 2010, according to the Bureau of Labor Statistics.
Some of these services, too, could be usurped, eventually, by machines. But on-demand apps also spread the work around by carving up jobs, like driving a taxi, into hundreds of little tasks, like a single drive, which allows more people to compete for smaller pieces of work. These new arrangements are already challenging the legal definitions of employer and employee, and there are many reasons to be ambivalent about them. But if the future involves a declining number of full-time jobs, as in Youngstown, then splitting some of the remaining work up among many part-time workers, instead of a few full-timers, wouldn’t necessarily be a bad development. We shouldn’t be too quick to excoriate companies that let people combine their work, art, and leisure in whatever ways they choose.
Today the norm is to think about employment and unemployment as a black-and-white binary, rather than two points at opposite ends of a wide spectrum of working arrangements. As late as the mid-19th century, though, the modern concept of “unemployment” didn’t exist in the United States. Most people lived on farms, and while paid work came and went, home industry—canning, sewing, carpentry—was a constant. Even in the worst economic panics, people typically found productive things to do. The despondency and helplessness of unemployment were discovered, to the bafflement and dismay of cultural critics, only after factory work became dominant and cities swelled. The 21st century, if it presents fewer full-time jobs in the sectors that can be automated, could in this respect come to resemble the mid-19th century: an economy marked by episodic work across a range of activities, the loss of any one of which would not make somebody suddenly idle. Many bristle that contingent gigs offer a devil’s bargain—a bit of additional autonomy in exchange for a larger loss of security. But some might thrive in a market where versatility and hustle are rewarded—where there are, as in Youngstown, few jobs to have, yet many things to do.
6. Government: The Visible Hand
In the 1950s, Henry Ford II, the CEO of Ford, and Walter Reuther, the head of the United Auto Workers union, were touring a new engine plant in Cleveland. Ford gestured to a fleet of machines and said, “Walter, how are you going to get these robots to pay union dues?” The union boss famously replied: “Henry, how are you going to get them to buy your cars?”
As Martin Ford (no relation) writes in his new book, Rise of the Robots, this story might be apocryphal, but its message is instructive. We’re pretty good at noticing the immediate effects of technology’s substituting for workers, such as fewer people on the factory floor. What’s harder is anticipating the second-order effects of this transformation, such as what happens to the consumer economy when you take away the consumers.
Technological progress on the scale we’re imagining would usher in social and cultural changes that are almost impossible to fully envision. Consider just how fundamentally work has shaped America’s geography. Today’s coastal cities are a jumble of office buildings and residential space. Both are expensive and tightly constrained. But the decline of work would make many office buildings unnecessary. What might that mean for the vibrancy of urban areas? Would office space yield seamlessly to apartments, allowing more people to live more affordably in city centers and leaving the cities themselves just as lively? Or would we see vacant shells and spreading blight? Would big cities make sense at all if their role as highly sophisticated labor ecosystems were diminished? As the 40-hour workweek faded, the idea of a lengthy twice-daily commute would almost certainly strike future generations as an antiquated and baffling waste of time. But would those generations prefer to live on streets full of high-rises, or in smaller towns?
Today, many working parents worry that they spend too many hours at the office. As full-time work declined, rearing children could become less overwhelming. And because job opportunities historically have spurred migration in the United States, we might see less of it; the diaspora of extended families could give way to more closely knitted clans. But if men and women lost their purpose and dignity as work went away, those families would nonetheless be troubled.
The decline of the labor force would make our politics more contentious. Deciding how to tax profits and distribute income could become the most significant economic-policy debate in American history. In The Wealth of Nations, Adam Smith used the term invisible hand to refer to the order and social benefits that arise, surprisingly, from individuals’ selfish actions. But to preserve the consumer economy and the social fabric, governments might have to embrace what Haruhiko Kuroda, the governor of the Bank of Japan, has called the visible hand of economic intervention. What follows is an early sketch of how it all might work.
In the near term, local governments might do well to create more and more-ambitious community centers or other public spaces where residents can meet, learn skills, bond around sports or crafts, and socialize. Two of the most common side effects of unemployment are loneliness, on the individual level, and the hollowing-out of community pride. A national policy that directed money toward centers in distressed areas might remedy the maladies of idleness, and form the beginnings of a long-term experiment on how to reengage people in their neighborhoods in the absence of full employment.
We could also make it easier for people to start their own, small-scale (and even part-time) businesses. New-business formation has declined in the past few decades in all 50 states. One way to nurture fledgling ideas would be to build out a network of business incubators. Here Youngstown offers an unexpected model: its business incubator has been recognized internationally, and its success has brought new hope to West Federal Street, the city’s main drag.

Near the beginning of any broad decline in job availability, the United States might take a lesson from Germany on job-sharing. The German government gives firms incentives to cut all their workers’ hours rather than lay off some of them during hard times. So a company with 50 workers that might otherwise lay off 10 people instead reduces everyone’s hours by 20 percent. Such a policy would help workers at established firms keep their attachment to the labor force despite the declining amount of overall labor.
Spreading work in this way has its limits. Some jobs can’t be easily shared, and in any case, sharing jobs wouldn’t stop labor’s pie from shrinking: it would only apportion the slices differently. Eventually, Washington would have to somehow spread wealth, too.
One way of doing that would be to more heavily tax the growing share of income going to the owners of capital, and use the money to cut checks to all adults. This idea—called a “universal basic income”—has received bipartisan support in the past. Many liberals currently support it, and in the 1960s, Richard Nixon and the conservative economist Milton Friedman each proposed a version of the idea. That history notwithstanding, the politics of universal income in a world without universal work would be daunting. The rich could say, with some accuracy, that their hard work was subsidizing the idleness of millions of “takers.” What’s more, although a universal income might replace lost wages, it would do little to preserve the social benefits of work.
The most direct solution to the latter problem would be for the government to pay people to do something, rather than nothing. Although this smacks of old European socialism, or Depression-era “makework,” it might do the most to preserve virtues such as responsibility, agency, and industriousness. In the 1930s, the Works Progress Administration did more than rebuild the nation’s infrastructure. It hired 40,000 artists and other cultural workers to produce music and theater, murals and paintings, state and regional travel guides, and surveys of state records. It’s not impossible to imagine something like the WPA—or an effort even more capacious—for a post-work future.
What might that look like? Several national projects might justify direct hiring, such as caring for a rising population of elderly people. But if the balance of work continues to shift toward the small-bore and episodic, the simplest way to help everybody stay busy might be government sponsorship of a national online marketplace of work (or, alternatively, a series of local ones, sponsored by local governments). Individuals could browse for large long-term projects, like cleaning up after a natural disaster, or small short-term ones: an hour of tutoring, an evening of entertainment, an art commission. The requests could come from local governments or community associations or nonprofit groups; from rich families seeking nannies or tutors; or from other individuals given some number of credits to “spend” on the site each year. To ensure a baseline level of attachment to the workforce, the government could pay adults a flat rate in return for some minimum level of activity on the site, but people could always earn more by taking on more gigs. Although a digital WPA might strike some people as a strange anachronism, it would be similar to a federalized version of Mechanical Turk, the popular Amazon sister site where individuals and companies post projects of varying complexity, while so-called Turks on the other end browse tasks and collect money for the ones they complete. Mechanical Turk was designed to list tasks that cannot be performed by a computer. (The name is an allusion to an 18th-century Austrian hoax, in which a famous automaton that seemed to play masterful chess concealed a human player who chose the moves and moved the pieces.)
A government marketplace might likewise specialize in those tasks that required empathy, humanity, or a personal touch. By connecting millions of people in one central hub, it might even inspire what the technology writer Robin Sloan has called “a Cambrian explosion of mega-scale creative and intellectual pursuits, a generation of Wikipedia-scale projects that can ask their users for even deeper commitments.”
There’s a case to be made for using the tools of government to provide other incentives as well, to help people avoid the typical traps of joblessness and build rich lives and vibrant communities. After all, the members of the Columbus Idea Foundry probably weren’t born with an innate love of lathe operation or laser-cutting. Mastering these skills requires discipline; discipline requires an education; and an education, for many people, involves the expectation that hours of often frustrating practice will eventually prove rewarding. In a post-work society, the financial rewards of education and training won’t be as obvious. This is a singular challenge of imagining a flourishing post-work society: How will people discover their talents, or the rewards that come from expertise, if they don’t see much incentive to develop either?
Modest payments to young people for attending and completing college, skills-training programs, or community-center workshops might eventually be worth considering. This seems radical, but the aim would be conservative—to preserve the status quo of an educated and engaged society. Whatever their career opportunities, young people will still grow up to be citizens, neighbors, and even, episodically, workers. Nudges toward education and training might be particularly beneficial to men, who are more likely to withdraw into their living rooms when they become unemployed.

7. Jobs and Callings
Decades from now, perhaps the 20th century will strike future historians as an aberration, with its religious devotion to overwork in a time of prosperity, its attenuations of family in service to job opportunity, its conflation of income with self-worth. The post-work society I’ve described holds a warped mirror up to today’s economy, but in many ways it reflects the forgotten norms of the mid-19th century—the artisan middle class, the primacy of local communities, and the unfamiliarity with widespread joblessness.
The three potential futures of consumption, communal creativity, and contingency are not separate paths branching out from the present. They’re likely to intertwine and even influence one another. Entertainment will surely become more immersive and exert a gravitational pull on people without much to do. But if that’s all that happens, society will have failed. The foundry in Columbus shows how the “third places” in people’s lives (communities separate from their homes and offices) could become central to growing up, learning new skills, discovering passions. And with or without such places, many people will need to embrace the resourcefulness learned over time by cities like Youngstown, which, even if they seem like museum exhibits of an old economy, might foretell the future for many more cities in the next 25 years.
On my last day in Youngstown, I met with Howard Jesko, a 60-year-old Youngstown State graduate student, at a burger joint along the main street. A few months after Black Monday in 1977, as a senior at Ohio State University, Jesko received a phone call from his father, a specialty-hose manufacturer near Youngstown. “Don’t bother coming back here for a job,” his dad said. “There aren’t going to be any left.” Years later, Jesko returned to Youngstown to work, but he recently quit his job selling products like waterproofing systems to construction companies; his customers had been devastated by the Great Recession and weren’t buying much anymore. Around the same time, a left-knee replacement due to degenerative arthritis resulted in a 10-day hospital stay, which gave him time to think about the future. Jesko decided to go back to school to become a professor. “My true calling,” he told me, “has always been to teach.”

One theory of work holds that people tend to see themselves in jobs, careers, or callings. Individuals who say their work is “just a job” emphasize that they are working for money rather than aligning themselves with any higher purpose. Those with pure careerist ambitions are focused not only on income but also on the status that comes with promotions and the growing renown of their peers. But one pursues a calling not only for pay or status, but also for the intrinsic fulfillment of the work itself.
When I think about the role that work plays in people’s self-esteem—particularly in America—the prospect of a no-work future seems hopeless. There is no universal basic income that can prevent the civic ruin of a country built on a handful of workers permanently subsidizing the idleness of tens of millions of people. But a future of less work still holds a glint of hope, because the necessity of salaried jobs now prevents so many from seeking immersive activities that they enjoy.
After my conversation with Jesko, I walked back to my car to drive out of Youngstown. I thought about Jesko’s life as it might have been had Youngstown’s steel mills never given way to a steel museum—had the city continued to provide stable, predictable careers to its residents. If Jesko had taken a job in the steel industry, he might be preparing for retirement today. Instead, that industry collapsed and then, years later, another recession struck. The outcome of this cumulative grief is that Howard Jesko is not retiring at 60. He’s getting his master’s degree to become a teacher. It took the loss of so many jobs to force him to pursue the work he always wanted to do.
2 comments:
This is a very useful piece. Inter alia, it shows how to meld academic work - sometimes less useless than it normally is - with on-the-ground realism and awareness of others' lives.
Most of the theory we draw on was generated - as JM Keynes noted - by the defunct and deceased. One way to see this piece is as a call to develop new ways of thinking about the nexus between 'the good life', global politics, and democratic capitalism - i.e. the world our students are inheriting.
Thanks!