Jessica Baron reports in Forbes:
This year, the sci-fi masterpiece The Matrix turns 20 years old. Written and directed by Lana and Lilly Wachowski (as The Wachowski Brothers) and produced by Joel Silver, the film was released on March 31, 1999, grossing over $460 million worldwide. It won four Academy Awards (as well as BAFTA and Saturn awards) and is a staple on any sci-fi “greatest hits” list. It also secured its place as a classic American film when it was added to the National Film Registry in 2012 for permanent preservation.
While the special effects may no longer impress us, what really stands out about the film after 20 years is the lingering suspicion that we’re being controlled by the technology we created. And, perhaps more frightening, that most of us would prefer to live in blissful ignorance rather than face the truth of living in a world where we have very little free will.
This raises the question: did we take the blue pill?
No doubt you remember the plot. Thomas Anderson is a computer programmer by day and hacker called Neo by night. He’s recruited to join the “real world” by Morpheus, the leader of the human resistance against their AI overlords. He gives Neo the choice to take a red pill and become part of the resistance or take the blue pill and forget he ever knew there was a real world out there so he can rejoin the rest of humanity in serving as an organic power source for the machines.
Neither choice is all that great, especially when the real world involves living on a bedraggled ship, eating gruel, and being chased by “agents” who are out to obliterate the resistance before it can free more minds from the Matrix. Neo, of course, takes the red pill and (20-year-old spoiler alert) becomes the hero.
There are plenty of pop philosophy books that have the deeper meanings covered in case you want to talk about Plato’s Cave Allegory, Descartes, etc. In general, the film is designed to make us think about free will, fate, the depths of oppression, the irony of creation that later dominates its inventor, and the power (and pain) of knowledge.
What is an “authentic” life?
The Matrix wants us to imagine the terrifying possibility of living in a false world where we’re being controlled by machines and juxtaposes this with a world where people are free, real, authentic.
Unfortunately, life outside the Matrix isn’t glamorous or even fun, and humans have all the same problems they did before – blind faith, denial, anger, love, lust, bitterness, and greed. I’m not saying it’s better to be controlled by an algorithm, but only that we shouldn’t so readily fool ourselves into thinking that we can escape it into some utopia.
And what of the lies we tell ourselves about what it means to be truly free? Neo manages to escape the great manipulation that the Matrix uses to placate its slimy human power sources, but as the “chosen one,” he’s not free to live his own authentic life. He’s pretty much conscripted by Morpheus into the most dangerous job possible using a combination of guilt and intimidation. It’s a good thing his life wasn’t all that glamorous back in the Matrix or he might have gone looking for Morpheus’ spare stash of blue pills.
Ignorance is bliss
While we’re meant to look down on those who still live in the Matrix, never questioning it, and blissfully ignorant of their servitude, we might also consider that their emotional experiences are real ones, even if their physical experiences are not. They seem happy to live in the world made for them. If that freaks you out, it might be helpful to note that we’re well on our way to a machine-made world, even without sentient robots. We already choose to ignore the writing on the wall as global tech companies like Facebook freely admit to controlling our behavior via social media. We let technology companies cross lines all the time and barely raise a fuss while we continue to scroll through our feeds.
Our creature comforts are too nice, too necessary (at least we believe) to give up, and we’ve proved over and over again that we’re unwilling to do so, even if it makes the world safer or fairer for other people. Think of the massive amounts of electronic waste we ship to developing countries, or the sweatshop labor used to put together our shiny devices – you’d rather not, right?

Do you think about where all our electronic trash goes and who or what has to live among it? When 5G comes, will you be excited about the streaming rate of your Netflix or demanding to know how companies plan to sustainably recycle your old devices (because your current devices won’t be able to access the new network)? What if I told you there was no way to truly recycle all the material and pay back the planet for every device manufactured?
Even the best among us aren’t Neo, or Morpheus. We’re Cypher, at best. He lasts a good 9 years trying to fight the good fight before deciding to turn traitor to his shipmates for the chance to go back and eat his fake steak in the fake world in blissful ignorance of the Matrix forevermore.
It’s so much easier to live in a Matrix of shiny tech ads and limited personal responsibility. Having to deal with the stark reality of planetary destruction is a real downer. That’s got to be someone else’s job, right?
Most of us have taken a big heaping spoonful of blue pills when it comes to the threats posed by emerging technologies as well. And who can blame us? The advancements made in computing, big data, robotics, and machine learning that might eventually turn into sentient machines are hard to understand and seem almost too bizarre to believe. If they become a threat to our existence, surely someone will intervene, right?
Who will build the future of humanity?
In the universe of The Matrix, humans created a sentient lifeform, but we’ve yet to create anything close. The best we have are some sophisticated machine learning algorithms that can perform some tasks better than we can, but are not on the cusp of taking over the world.
And what are we developing AI for right now? A combination of drudgery and deeply important things that humans should arguably not turn over to machines entirely. We’ve got them coming up with HR algorithms to judge an employee’s worth, beating us at board and video games, predicting crime, recidivism, and appropriate jail sentences, predicting our chances of heart disease, diagnosing cancer, helping us build models of the world we ruined and predicting how we might fix it, serving as voice assistants (that we enjoy being marvelously rude to), suggesting things we might want to buy on Amazon based on our past purchases, and advising on trading decisions in our financial markets.
There are people dedicated to making sure these systems stay in check, but we don’t give them power, funding, or other resources, much less a political infrastructure through which to operate.
Plenty of our tech patriarchs have grave fears about AI: Bill Gates, Tim Berners-Lee, and more have already warned that we should proceed with caution. Then there’s Elon Musk, who thinks that intelligent machines could be more dangerous than nuclear weapons. Stephen Hawking once declared that AI “could spell the end of the human race.” Even Alan Turing, who helped found the field, said in 1951 that these kinds of machines could “take control.” They weren’t worried about our current “weak AI,” or even the “narrow AI” we’re building into autonomous vehicles, but the full-blown able-to-enslave-humanity kind that we still seem to want to take a crack at developing.
And who is at the leading edge of AI innovation right now? IBM (currently involved in a class-action lawsuit for firing 20,000 employees older than 40 in the last six years), Google (which has faced multiple privacy, advertising, intellectual property, and discrimination lawsuits over the years), Facebook (which will be in court for the next decade fighting state, federal, and international privacy and consumer protection lawsuits and explaining data abuse related to Cambridge Analytica and various other hacks that seem to be announced weekly), Apple (which also has an impressive portfolio of lost lawsuits), and Microsoft (which was in court for over two decades battling against, and mostly losing or settling, antitrust allegations and patent infringement cases and is currently embroiled in a backlash over the use of their technology in military weapons). Many of us use products created by these companies every day despite their transgressions.
It’s worth noting that Alphabet (Google’s parent company) and rival Microsoft have both warned in recent reports that their AI might cause ethical, technical, and legal issues that could negatively affect their brands. (Note: not the world or your family, but their brands and bottom lines.)
Who needs to build Skynet when we’re happy to hand over the information that makes up our very beings to these companies now?
So why are we continuing to build more sophisticated AI? Well, partly because engineers want to see if they can. At a conference held by Prague-based AI startup GoodAI last August, AI experts and thought leaders were asked a simple question: “Why should we bother trying to create human-level AI?” Granted, they were asked to give a quick response right off the top of their heads, but their answers were less than inspiring, especially for people who have dedicated their careers to AI: “To create a singularity, perhaps”; “To understand ourselves.” Curiosity and the desire to do good are nice and all, but are we really content to wait and see if we can build something that might harm us and then try to control it?
If we’re being honest, we’re now investing huge amounts of money in it for two main reasons: 1) because we think it can help us make or save huge amounts of money, and 2) because if we don’t, someone else will (namely, China). If you think of what could actually go wrong with sentient machines, these seem like pretty silly reasons to try it anyway, but they are, in fact, the ones that make the world go ‘round.
Will you keep taking the blue pill?
There seem to be three main approaches to dealing with fears about increasingly sophisticated AI: 1) Stop worrying about it; 2) Hire some ethicists and call it a more responsible approach; and 3) Build a diverse panel of experts and give them the power to truly approve or reject research proposals. We could absolutely use more support for the third, but none of these is likely to make an impact without serious international cooperation, precisely the kind humanity has proved incapable of in the early 21st century.
The good news is that The Matrix is fiction, not the future. In the film, humans only got one chance to take the pill. We get a new chance every day. Our votes, our dollars, our voices, our support of advocates, our efforts to understand the world better, even our choice of social media all give us an opportunity to support or reject the future other people are trying to decide for us.