Whatever its intentions, which are themselves open to debate, OpenAI's claim that it will prevent the use of its tools for political ends has been met with rank skepticism.
That skepticism is grounded in the reality that tech companies have proven largely incapable of controlling how their tools are applied by determined end users. JL
Gerrit De Vynck reports in the Washington Post:
OpenAI, which makes the ChatGPT chatbot and DALL-E image generator and provides AI to many companies, said it wouldn’t allow the use of its tech to build apps for political campaigns, to discourage people from voting or to spread misinformation about the voting process. It would also begin putting embedded watermarks into images made with its DALL-E image generator. (But) there have already been high-profile instances of election-related lies being generated by AI tools. Images made by AI have already shown up all over the web, including in Google search, being presented as real. Generative AI tools do not have an understanding of what is true or false.
“We work to anticipate and prevent relevant abuse — such as misleading ‘deepfakes,’ scaled influence operations, or chatbots impersonating candidates,” OpenAI said in the blog post.
Political parties, state actors and opportunistic internet entrepreneurs have used social media for years to spread false information and influence voters. But activists, politicians and AI researchers have expressed concern that chatbots and image generators could increase the sophistication and volume of political misinformation.
OpenAI’s measures come after other tech companies have also updated their election policies to grapple with the AI boom. In December, Google said it would restrict the kind of answers its AI tools give to election-related questions. It also said it would require political campaigns that bought ad spots from it to disclose when they used AI. Facebook parent Meta also requires political advertisers to disclose if they used AI.
But the companies have struggled to administer their own election misinformation policies. Though OpenAI bars using its products to create targeted campaign materials, an August report by The Post showed these policies were not enforced.
There have already been high-profile instances of election-related lies being generated by AI tools. In October, The Washington Post reported that Amazon’s Alexa home speaker was falsely declaring that the 2020 presidential election was stolen and full of election fraud.
Sen. Amy Klobuchar (D-Minn.) has expressed concern that ChatGPT could interfere with the electoral process, for example by telling people to go to a fake address when asked what to do if lines are too long at a polling location.
If a country wanted to influence the U.S. political process, it could, for example, build human-sounding chatbots that push divisive narratives in American social media spaces, rather than having to pay human operatives to do it. Chatbots could also craft personalized messages tailored to each voter, potentially increasing their effectiveness at low cost.
In the blog post, OpenAI said it was “working to understand how effective our tools might be for personalized persuasion.” The company recently opened its “GPT Store,” which allows anyone to easily train a chatbot using data of their own.
Generative AI tools do not have an understanding of what is true or false. Instead, they predict what a good answer might be to a question based on crunching through billions of sentences ripped from the open internet. Often, they provide humanlike text full of helpful information. They also regularly make up untrue information and pass it off as fact.
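To make that "predict a good answer" idea concrete, here is a minimal sketch: a toy bigram model over a few made-up sentences, nothing like ChatGPT's actual architecture or training data, but enough to show why a system that only learns which words tend to follow which can produce fluent text with no notion of whether a claim is true.

```python
import random
from collections import defaultdict

# A tiny stand-in corpus (real systems crunch billions of sentences).
corpus = (
    "the election was held in november . "
    "the election was stolen in november . "
    "ballots were counted in november ."
).split()

# Record which words follow which word (a bigram model).
following = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev].append(nxt)

def generate(start, length=6):
    """Emit a statistically plausible continuation; truth never enters into it."""
    words = [start]
    for _ in range(length):
        options = following.get(words[-1])
        if not options:
            break
        words.append(random.choice(options))
    return " ".join(words)

# Depending on the random draw, this prints "the election was held in november ."
# or, just as fluently, "the election was stolen in november ."
print(generate("the"))
```

A large language model replaces the word-count table with a neural network trained on vastly more text, but the failure mode the article describes, confidently asserting whatever its training data makes statistically likely, is the same in kind.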
Images made by AI have already shown up all over the web, including in Google search, being presented as real images. They’ve also started appearing in U.S. election campaigns. Last year, an ad released by Florida Gov. Ron DeSantis’s campaign used what appeared to be AI-generated images of Donald Trump hugging former White House coronavirus adviser Anthony S. Fauci. It’s unclear which image generator was used to make the images.
Other companies, including Google and Photoshop maker Adobe, have said they will also use watermarks in images generated by their AI tools. But the technology isn’t a magic cure for the spread of fake AI images. Visible watermarks can be easily cropped or edited out. Embedded, cryptographic ones, which are not visible to the human eye, can be distorted simply by flipping the image or changing its color.
Tech companies say they’re working to improve this problem and make their watermarks tamper-proof, but so far none appears to have figured out how to do that effectively.
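To illustrate why embedded marks are so fragile, here is a minimal sketch using a hypothetical least-significant-bit watermark (not the specific schemes OpenAI, Google or Adobe use, which are more robust than this toy). It hides a bit pattern in the pixels of a stand-in image and shows that a simple mirror flip or a slight color shift is enough to garble it.

```python
import numpy as np

rng = np.random.default_rng(0)

# A stand-in 64x64 grayscale "AI image" and a 64-bit watermark pattern.
image = rng.integers(0, 256, size=(64, 64), dtype=np.uint8)
watermark = rng.integers(0, 2, size=64, dtype=np.uint8)

def embed(img, bits):
    """Hide one bit in the least-significant bit of each pixel in the top row."""
    out = img.copy()
    out[0, : len(bits)] = (out[0, : len(bits)] & 0xFE) | bits
    return out

def extract(img, n):
    """Read the hidden bits back out of the top row."""
    return img[0, :n] & 1

marked = embed(image, watermark)
print("untouched copy:", np.array_equal(extract(marked, 64), watermark))    # True

flipped = marked[:, ::-1]                                        # mirror left-to-right
recolored = (marked.astype(np.float64) * 0.9).astype(np.uint8)   # slight darkening

print("after flip    :", np.array_equal(extract(flipped, 64), watermark))   # False
print("after recolor :", np.array_equal(extract(recolored, 64), watermark)) # False
```

Production watermarks are harder to strip than this toy, but the article's point stands: marks embedded in pixel values degrade under ordinary edits, which is why none of the companies treats watermarking alone as a fix.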