In June 2015, Sam Altman told a tech conference, “I think that AI will probably, most likely, sort of lead to the end of the world. But in the meantime, there will be great companies created with serious machine learning.”
His comments echoed a certain postapocalyptic New Yorker cartoon, but Altman, then the president of the startup accelerator Y Combinator, did not appear to be joking. In the next breath, he announced that he’d just funded a new venture focused on “AI safety research.” That company was OpenAI, now best known as the creator of ChatGPT.
The simultaneous cheerleading and doomsaying about AI has only gotten louder in the years since. Charles Jones, a professor of economics at Stanford Graduate School of Business, has been watching with interest as developers and investors like Altman grapple with the dilemma at the heart of this rapidly advancing technology. “They acknowledge this double-edged sword aspect of AI: It could be more important than electricity or the internet, but it does seem like it could potentially be more dangerous than nuclear weapons,” he says.
Out of curiosity, Jones, an expert on modeling economic growth, did some back-of-the-envelope math on the relationship between AI-fueled productivity and existential risk. What he found surprised him. It formed the basis of a new paper in which he presents some models for assessing AI’s tradeoffs. While these models can’t predict when or if advanced artificial intelligence will slip its leash, they demonstrate how variables such as economic growth, existential risk, and risk tolerance will shape the future of AI — and humanity.
There are still a lot of unknowns here, as Jones is quick to emphasize. We can’t put a number on the likelihood that AI will birth a new age of prosperity or kill us all. Jones acknowledges that both of those outcomes may prove unlikely, but also notes that they may be correlated. “It does seem that the same world where this fantastic intelligence helps us innovate and raise growth rates a lot also may be the world where these existential risks are real as well,” he says. “Maybe those two things go together.”
Rise of the Machines
Jones’ model starts with the assumption that AI could generate unprecedented economic growth. Just as people coming up with new ideas have driven the past few centuries of progress, AI-generated ideas could fuel the next wave of innovation. The big difference is that AI does not need years of education before it can produce breakthrough research or innovations. “The fact that it’s a computer program means you can just spin up a million instances of it,” Jones says. “And then you’ve got a million really, really smart researchers answering a question for you.”
Once scaling laws kick in and AI’s capabilities increase exponentially, we could be looking at an economic expansion unlike any in history. Taking one of the most optimistic forecasts, Jones calculates that if AI spurs a 10% annual growth rate, global incomes will increase more than 50-fold over 40 years. In comparison, real per capita GDP in the U.S. doubled in the past 40 years.
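To sanity-check the arithmetic behind that comparison (an illustrative calculation, not one taken from the paper), you can compound the growth rates directly: 10% a year for 40 years gives roughly a 45-fold increase with annual compounding, or about 55-fold with continuous compounding, versus roughly a doubling at the 2%-ish per-capita rates rich countries have actually experienced.

```python
# Back-of-the-envelope compounding (illustrative arithmetic only): income
# multiple after `years` of growth at annual rate `rate`.
import math

def growth_factor(rate, years, continuous=False):
    return math.exp(rate * years) if continuous else (1 + rate) ** years

years = 40
print(growth_factor(0.10, years))                   # ~45x with annual compounding
print(growth_factor(0.10, years, continuous=True))  # ~55x with continuous compounding
print(growth_factor(0.02, years))                   # ~2.2x, roughly recent U.S. experience
```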
Now for the downside: Let’s assume that such phenomenal growth comes with a 1% chance that AI will end the world in any given year. At what point would we decide that all this increased productivity is not worth the attendant danger? To estimate this, Jones built a simple model that uses the log utility curve, a common representation of consumer preferences, to represent aversion to risk. When he ran those numbers, he found that people would accept a substantial chance that AI would end humanity in the next 40 years.
“The surprising thing here is that people with log preferences in the simple model are willing to take a one-in-three chance of killing everyone to get a 50-fold increase in consumption,” Jones says. Yet even these risk-takers have a limit: When the existential risk from AI doubles, the ideal outcome under log utility is never letting AI run at all.
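A minimal sketch of the kind of expected-utility comparison behind that one-in-three figure might look like the following. This is not the paper's exact specification: it assumes a one-shot gamble, flow utility of u_bar + log(c) with death normalized to zero, baseline consumption of 1, and a free constant u_bar that calibrates how much being alive is worth relative to consumption; the one-in-three number emerges only for particular values of u_bar.

```python
# Illustrative sketch, not the paper's exact model: the largest one-shot
# existential risk a log-utility agent would accept in exchange for
# multiplying consumption by `gain`. Flow utility of being alive at
# consumption c is u(c) = u_bar + log(c); death is normalized to 0;
# baseline consumption is normalized to c = 1. u_bar is an assumed
# calibration constant for how much being alive is worth.
import math

def cutoff_risk(gain, u_bar):
    """Largest death probability d with (1 - d) * u(gain) >= u(1)."""
    u_without_ai = u_bar                 # u_bar + log(1)
    u_with_ai = u_bar + math.log(gain)   # if the gamble pays off
    return 1 - u_without_ai / u_with_ai

for u_bar in (4.0, 7.8, 12.0):
    print(u_bar, round(cutoff_risk(50, u_bar), 3))
# u_bar near 7.8 (about 2 * log(50)) gives roughly a one-in-three cutoff;
# a lower u_bar (life worth less relative to consumption) raises the
# acceptable risk, while a higher u_bar lowers it.
```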
In scenarios where people have lower risk tolerance, they would accept slower growth in exchange for reduced risk. That raises the question of whose interests will guide the evolution of AI. “If the entrepreneurs who are designing these AIs are very tolerant of risk, they may not have the average person’s risk tolerance, and so they may be more willing to take these gambles,” Jones says.
However, his paper also suggests that it may not be wealthy countries like the U.S. that will be most willing to risk AI running amok. “Getting an extra thousand dollars is really valuable when you’re poor and less valuable when you’re rich,” he explains. Likewise, if AI brings huge bumps in living standards in poorer nations, it could make them more tolerant of its risks.
Healthy, Wealthy… and Wise?
Jones also built a more complex model that considers the possibility that AI will help us live healthier, longer lives. “In addition to inventing safer nuclear power, faster computer chips, and better solar panels, AI might also cure cancer and heart disease,” he says. Those kinds of breakthroughs would further complicate our relationship with this double-edged tech. If the average life expectancy doubled, even the most risk-averse people would be much more willing to take their chances with AI risk. “The surprise here is that cutting mortality in half suddenly turns your willingness to accept existential risk from 4% to 25% or even more,” Jones explains. In other words, people would be much more willing to gamble if the prize was a chance to live to 200.
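A heavily simplified sketch of how longevity can swamp consumption in this calculation, under assumptions of my own rather than the paper's richer model: treat halved mortality as a doubling of remaining lifespan, use bounded (CRRA) utility so that even a 50-fold consumption gain adds little flow utility, and discount future years. The calibration constants below (u_bar, the risk-aversion coefficient g, and the discount rate rho) are assumed for illustration only.

```python
# Illustrative toy calibration, not the paper's richer model: how a longevity
# gain changes the acceptable existential risk for a risk-averse agent.
# Flow utility is u(c) = u_bar + (c**(1 - g) - 1) / (1 - g) with relative risk
# aversion g > 1 (bounded in consumption); death is 0; baseline consumption is
# 1; future years are discounted at rate rho. All constants are assumed.
import math

def pv_years(T, rho):
    """Present value of one util per year for T years, discounted at rate rho."""
    return (1 - math.exp(-rho * T)) / rho

def flow_u(c, u_bar, g):
    return u_bar + (c ** (1 - g) - 1) / (1 - g)

def cutoff_risk(gain, u_bar, g, T_base, T_ai, rho):
    """Largest d with (1 - d) * pv_years(T_ai) * u(gain) >= pv_years(T_base) * u(1)."""
    return 1 - (pv_years(T_base, rho) * flow_u(1, u_bar, g)) / (
        pv_years(T_ai, rho) * flow_u(gain, u_bar, g))

u_bar, g, rho, gain = 12.0, 3.0, 0.02, 50                  # assumed calibration
print(round(cutoff_risk(gain, u_bar, g, 40, 40, rho), 3))  # ~0.04: consumption gain alone
print(round(cutoff_risk(gain, u_bar, g, 40, 80, rho), 3))  # ~0.34: lifespan doubles too
# Bounded utility makes the 50-fold consumption gain worth only a few percent
# of risk on its own, but doubling remaining lifespan raises the cutoff
# sharply; the exact numbers depend entirely on the assumed calibration.
```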
The models also suggest that AI could mitigate the economic effects of falling birth rates, another subject Jones has recently written about. “If machines can create ideas, then the slowing of population growth may not be such a problem,” he says.
Jones’ models provide insights into the wildest visions of AI, such as the singularity — the fabled moment when technological growth becomes infinite. He found that, in practical terms, accelerated growth might be hard to distinguish from the singularity. “If growth rates were to go to 10% a year, that would be just as good as a singularity,” he says. “We’re all going to be as rich as Bill Gates.”
Overall, Jones cautions that none of his results are predictive or prescriptive. Instead, they’re meant to help refine our thinking about the double-edged sword of AI. As we rush toward a future where AI can’t be turned off, efforts to quantify and limit the potential for disaster will become even more essential. “Any investments in reducing that risk are really valuable,” Jones says.
Jul 9, 2024
How Does One Determine If AI's Risks Outweigh Its Promise?
Research at Stanford suggests AI could increase global incomes 50-fold over 40 years. But people with typical preferences would accept up to a 33% chance of AI killing humanity to get that payoff. And if AI's existential risk 'merely' doubles, the model says it is better not to let AI run at all.
The issue is that entrepreneurs and venture capital investors have a higher risk tolerance than most people, so Silicon Valley types are willing to take risks that the vast majority are not. And like it or not, few of the world's economies are led by tech entrepreneurs or investors. JL