Besides, one identity is so overrated...JL
Cade Metz and Keith Collins report in the New York Times:
Researchers built a system that can analyze thousands of (real) celebrity snapshots, recognize common patterns, and create new images that look much the same — but are still a little different. The system sets up two neural networks — one that generates the images and another that tries to determine whether those images are real or fake. These are called generative adversarial networks, or GANs. One does its best to fool the other — and the other does its best not to be fooled.
The woman in the photo seems familiar. She looks like Jennifer Aniston, the “Friends” actress, or Selena Gomez, the child star turned pop singer. But not exactly. She appears to be a celebrity, one of the beautiful people photographed outside a movie premiere or an awards show. And yet, you cannot quite place her.

That’s because she’s not real. She was created by a machine. The image is one of the faux celebrity photos generated by software under development at Nvidia, the big-name computer chip maker that is investing heavily in research involving artificial intelligence.

At a lab in Finland, a small team of Nvidia researchers recently built a system that can analyze thousands of (real) celebrity snapshots, recognize common patterns, and create new images that look much the same — but are still a little different.

Like other prominent A.I. researchers, the Nvidia team believes the techniques that drive this project will continue to improve in the months and years to come, generating significantly larger and more complex images.

“We think we can push this further, generating not just photos but 3-D images that can be used in computer games and films,” said Jaakko Lehtinen, one of the researchers behind the project.

Today, many systems generate images and sounds using a complex algorithm called a neural network. This is a way of identifying patterns in large amounts of data. By identifying common patterns in thousands of car photos, for instance, a neural network can learn to identify a car. But it can also work in the other direction: It can use those patterns to generate its own car photos.

As it built a system that generates new celebrity faces, the Nvidia team went a step further in an effort to make them far more believable. It set up two neural networks — one that generated the images and another that tried to determine whether those images were real or fake. These are called generative adversarial networks, or GANs. In essence, one system does its best to fool the other — and the other does its best not to be fooled.

“The computer learns to generate these images by playing a cat-and-mouse game against itself,” said Mr. Lehtinen.

A second team of Nvidia researchers recently built a system that can automatically alter a street photo taken on a summer’s day so that it looks like a snowy winter scene. Researchers at the University of California, Berkeley, have designed another that learns to convert horses into zebras and Monets into Van Goghs. DeepMind, a London-based A.I. lab owned by Google, is exploring technology that can generate its own videos. And Adobe is fashioning similar machine learning techniques with an eye toward pushing them into products like Photoshop, its popular image design tool.

Trained designers and engineers have long used technology like Photoshop and other programs to build realistic images from scratch. This is what movie effects houses do.
But it is becoming easier for machines to learn how to generate these images on their own, said Durk Kingma, a researcher at OpenAI, the artificial intelligence lab founded by Tesla chief executive Elon Musk and others, who specializes in this kind of machine learning.

“We now have a model that can generate faces that are more diverse and in some ways more realistic than what we could program by hand,” he said, referring to Nvidia’s work in Finland.

But new concerns come with the power to create this kind of imagery. With so much attention on fake media these days, we could soon face an even wider range of fabricated images than we do today.

“The concern is that these techniques will rise to the point where it becomes very difficult to discern truth from falsity,” said Tim Hwang, who previously oversaw A.I. policy at Google and is now director of the Ethics and Governance of Artificial Intelligence Fund, an effort to fund ethical A.I. research. “You might believe that accelerates problems we already have.”

The idea of generative adversarial networks was originally developed in 2014 by a researcher named Ian Goodfellow, while he was a Ph.D. student at the University of Montreal. He dreamed up the idea after an argument at a local bar, and built the first prototype that same night. Now Mr. Goodfellow is a researcher at Google, and his idea is among the most important and widely explored concepts in the rapidly accelerating world of artificial intelligence.

Though this kind of photo generation is currently limited to still images, many researchers believe it could expand to videos, games and virtual reality. But Mr. Kingma said this could take years, because it will require much larger amounts of computing power. That is the primary problem that Nvidia is also working on, along with other chip makers.

Researchers are also using a wide range of other machine learning techniques to edit video in more convincing — and sometimes provocative — ways.

In August, a group at the University of Washington made headlines when they built a system that could put new words into the mouth of a Barack Obama video. Others, including Pinscreen, a California start-up, and iFlyTek of China, are developing similar techniques using images of President Donald Trump.

The results are not completely convincing. But the rapid progress of GANs and other techniques points to a future where it becomes easier for anyone to generate faux images or doctor the real thing. That is cause for real concern among experts like Mr. Hwang.

Eliot Higgins, the founder of Bellingcat, an organization that analyzes current events using publicly available images and video, pointed out that fake images are by no means a new problem. In the years since the rise of Photoshop, the onus has been on citizens to approach what they view online with skepticism.

But many of us still put a certain amount of trust in photos and videos that we don’t necessarily put in text or word of mouth. Mr. Hwang believes the technology will evolve into a kind of A.I. arms race, pitting those trying to deceive against those trying to identify the deception.

Mr. Lehtinen downplays the effect his research will have on the spread of misinformation online. But he does say that, as time goes on, we may have to rethink the very nature of imagery. “We are approaching some fundamental questions,” he said.
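For readers curious how the “cat-and-mouse game” described above looks in practice, here is a minimal GAN sketch in PyTorch (my choice of framework; the article names none). It trains a generator to mimic a toy 2-D Gaussian rather than celebrity photos, and every network size and hyperparameter here is invented for illustration, not taken from Nvidia’s system.

```python
# Minimal GAN sketch: an illustration of the generator/discriminator
# "cat-and-mouse game" the article describes, NOT Nvidia's actual system.
# Toy data, network sizes, and hyperparameters are all assumptions.
import torch
import torch.nn as nn

torch.manual_seed(0)

LATENT_DIM = 8  # size of the random noise the generator starts from

# Generator: turns random noise into a fake 2-D "sample".
G = nn.Sequential(
    nn.Linear(LATENT_DIM, 32), nn.ReLU(),
    nn.Linear(32, 2),
)

# Discriminator: scores how likely a 2-D sample is to be real.
D = nn.Sequential(
    nn.Linear(2, 32), nn.ReLU(),
    nn.Linear(32, 1),  # raw logit; BCEWithLogitsLoss applies the sigmoid
)

loss_fn = nn.BCEWithLogitsLoss()
opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)

def real_batch(n=64):
    # Stand-in for "real celebrity photos": points from a shifted Gaussian.
    return torch.randn(n, 2) * 0.5 + torch.tensor([2.0, 2.0])

for step in range(2000):
    # --- Train the discriminator: do its best not to be fooled. ---
    real = real_batch()
    fake = G(torch.randn(real.size(0), LATENT_DIM)).detach()  # freeze G here
    d_loss = loss_fn(D(real), torch.ones(real.size(0), 1)) + \
             loss_fn(D(fake), torch.zeros(fake.size(0), 1))
    opt_d.zero_grad()
    d_loss.backward()
    opt_d.step()

    # --- Train the generator: do its best to fool the discriminator. ---
    fake = G(torch.randn(64, LATENT_DIM))
    g_loss = loss_fn(D(fake), torch.ones(64, 1))  # G wants D to say "real"
    opt_g.zero_grad()
    g_loss.backward()
    opt_g.step()

print(G(torch.randn(5, LATENT_DIM)))  # samples should cluster near (2, 2)
```

The loop simply alternates the two updates the article describes: the discriminator learns not to be fooled, then the generator learns to fool it; scaled up to convolutional networks and photo data, the same adversarial loop is what produces the faux celebrity faces.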