Martijn Rasser reports in Scientific American:
Public opinion shifts, skewed election results, mass confusion, ethnic violence, war. All of these events could easily be triggered by deepfakes—realistic-seeming but falsified audio and video made with AI techniques. Leaders in government and industry, and the public at large, are justifiably alarmed. Fueled by advances in AI and spread over the tentacles of social media, deepfakes may prove to be among the most destabilizing forces humankind has faced in generations.
It will soon be impossible to tell by the naked eye or ear whether a video or audio clip is authentic. While propaganda is nothing new, the visceral immediacy of voice and image gives deepfakes unprecedented impact and authority; as a result, both governments and industry are scrambling to develop ways to reliably detect them. Silicon Valley startup Amber, for example, is working on ways to detect even the most sophisticated altered video. You can imagine a day when we can verify the authenticity and provenance of a video by way of a digital watermark.
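To make the provenance idea concrete, here is a minimal sketch of one possible approach: fingerprinting a clip with a cryptographic hash and comparing it against a digest published by the original source. This is an illustration of the general verification concept, not Amber's actual technology or a real watermarking scheme; the function names and sample bytes are hypothetical.

```python
import hashlib

def fingerprint(data: bytes) -> str:
    """Return a SHA-256 hex digest that uniquely fingerprints the clip's bytes."""
    return hashlib.sha256(data).hexdigest()

def is_authentic(data: bytes, published_digest: str) -> bool:
    """Check a clip against a digest the original source published.

    Any change to the bytes -- a re-encode, a swapped frame, dubbed
    audio -- produces a completely different digest, so a mismatch
    signals that the clip is not the original.
    """
    return fingerprint(data) == published_digest

# Hypothetical example: a newsroom publishes the digest of its original footage.
original = b"...original video bytes..."
published = fingerprint(original)

tampered = b"...deepfaked video bytes..."
print(is_authentic(original, published))  # True
print(is_authentic(tampered, published))  # False
```

A real system would need more than this (signed metadata, capture-time attestation, tolerance for legitimate transcoding), but the core idea is the same: bind a clip to a verifiable record made at its source.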
Developing deepfake detection technology is important, but it's only part of the solution. It is the human factor—weaknesses in our psychology—not their technical sophistication that makes deepfakes so effective. New research hints at how foundational the problem is.
After showing over 3,000 adults fake images accompanied by fabricated text, a group of researchers reached two conclusions. First, the more online experience and familiarity with digital photography people had, the more skeptical they were when evaluating the information. Second, confirmation bias—the tendency to frame new information to support our pre-existing beliefs—was a big factor in how people judged the veracity of the fake information.
The researchers recommend focusing on improved digital literacy to counter deepfakes. While sensible, this is just part of the answer. As a society, we need to go further. Deepfakes work so well because they have large audiences willing to believe and spread them. We often see what we want to be true—a desirability bias.
To understand how a well-executed deepfake could play into desirability bias, consider these vignettes. The first was a video of a high school student that went viral on Twitter in January. Many saw a smug young man mocking a tribal elder. Others saw a nervous teenager not knowing how to react in a strange situation. This was unaltered footage. What was missing was context—the truth was more complex. People saw what they wanted to see. The second was a video of Speaker Nancy Pelosi crudely edited to make it appear she was drunkenly slurring her words. It rapidly spread over social media, garnering millions of views—a preview of the power of political disinformation.
Today's information sources are increasingly fragmented and micro-targeted. It is easy to find outlets for news you agree with and want to hear, and equally easy to tune out what you don't. This plays into the hands of those who seek to disinform. Bad actors can hijack social media to amplify the desirability bias. Psychologists found that repeating the same message boosts the propaganda effect through a process called priming. The more you are exposed to a statement, the likelier you are to rate it as true.
There is a rich body of research on cognitive psychology and decision-making. One of the best works is Psychology of Intelligence Analysis, written by Richards Heuer, a longtime CIA officer. This book is a cornerstone for teaching the art and science of analytic tradecraft in the U.S. intelligence community. It also holds important lessons on how to help combat deepfakes and should be part of the foundation for a new educational approach in the deepfake era.
Heuer makes two key points: being aware of cognitive biases is not enough, and you must also apply methods that foster higher levels of critical thinking, such as structuring information using decision trees and causal diagrams, and challenging assumptions.
American high schools and colleges should incorporate cognitive psychology into their mandatory curricula. In an era of constant information and disinformation bombardment, there is a societal need to improve critical thinking by teaching tools and techniques to tackle cognitive biases. While this will not be easy, even successfully teaching a small percentage of students these concepts will help. Teaching people to pause and assess information before blindly sharing a shocking video will help to stem a deepfake's spread.
We cannot look to technology alone to tackle the problem. The reason deepfakes work is rooted too deeply in our psychology: our biases enable false media to flourish. By understanding how to spot and address those biases, within ourselves and in others, we stand a better chance of mitigating the deepfake threat.