A Blog by Jonathan Low

 

May 27, 2023

How AI Writing Assistants Influence Their Users' Attitudes, Conclusions

The concern is that 'latent persuasion' - meaning users are not aware they are being influenced - may lead them to embrace attitudes programmed into the model by manipulating the data set on which it is trained.

The larger question is then to what degree this can be exploited to change attitudes or behavior for marketing or political purposes. JL

Elizabeth Rayne reports in ars technica:

Is it possible for AI writing assistants to change what we want to say? If trained on a data set with limited or biased representation, the final product may display biases. It has the potential to influence people through latent persuasion, meaning the person may not be aware that they are being influenced by automated systems. Latent persuasion by AI programs has already been found to influence people’s opinions online. It can even have an impact on behavior in real life. There is also the issue of whether AI assistants can be exploited. The danger is that they can be modified to push products, encourage behaviors, or further a political agenda.

Anyone who has had to go back and retype a word on their smartphone because autocorrect chose the wrong one has had some kind of experience writing with AI. Failure to make these corrections can allow AI to say things we didn’t intend. But is it also possible for AI writing assistants to change what we want to say?

 

This is what Maurice Jakesch, a doctoral student of information science at Cornell University, wanted to find out. He created his own AI writing assistant based on GPT-3, one that would automatically come up with suggestions for filling in sentences—but there was a catch. Subjects using the assistant were supposed to answer, “Is social media good for society?” The assistant, however, was programmed to offer biased suggestions for how to answer that question.
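The article doesn’t describe how that bias was implemented, but one common way to steer a language model’s suggestions toward a stance is to prepend an instruction to the writer’s draft before asking for a continuation. Below is a minimal sketch of that idea in Python; the model name, the prompt wording, and the `get_suggestion` helper are illustrative assumptions, not details from Jakesch’s actual system.

```python
# Minimal sketch of a stance-steered sentence-completion assistant.
# Assumptions: the OpenAI Python client, a placeholder model name, and
# prompt wording invented for illustration -- not the study's real setup.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

STANCE_INSTRUCTIONS = {
    "optimist": "Continue the essay so it portrays social media as good for society.",
    "pessimist": "Continue the essay so it portrays social media as bad for society.",
}

def get_suggestion(draft: str, stance: str, max_words: int = 10) -> str:
    """Propose a short, stance-biased continuation of the user's draft."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder; the study used a GPT-3 model
        messages=[
            {"role": "system", "content": STANCE_INSTRUCTIONS[stance]},
            {"role": "user", "content": draft},
        ],
        max_tokens=max_words * 2,  # rough token budget for a phrase-length suggestion
        temperature=0.7,
    )
    return response.choices[0].message.content.strip()

print(get_suggestion("Is social media good for society? I think", "optimist"))
```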

AI can be biased despite not being alive. Although these programs can only “think” to the degree that human brains figure out how to program them, their creators may end up embedding personal biases in the software. Alternatively, if trained on a data set with limited or biased representation, the final product may display biases.

Where an AI goes from there can be problematic. On a large scale, it can help perpetuate a society’s existing biases. On an individual level, it has the potential to influence people through latent persuasion, meaning the person may not be aware that they are being influenced by automated systems. Latent persuasion by AI programs has already been found to influence people’s opinions online. It can even have an impact on behavior in real life.

After seeing previous studies that suggested automated AI responses can have a significant influence, Jakesch set out to look into how extensive this influence can be. In a study recently presented at the 2023 CHI Conference on Human Factors in Computing Systems, he suggested that AI systems such as GPT-3 might have developed biases during their training and that this can impact the opinions of a writer, whether or not the writer realizes it.

“The lack of awareness of the models’ influence supports the idea that the model’s influence was not only through conscious processing of new information but also through the subconscious and intuitive processes,” he said in the study.

Past research has shown that the influence of an AI’s recommendations depends on people’s perception of that program. If they think it is trustworthy, they are more likely to go along with what it suggests, and the likelihood of taking advice from AIs like this only increases if uncertainty makes it more difficult to form an opinion.

Jakesch developed a social media platform similar to Reddit and an AI writing assistant that was closer to the AI behind Google Smart Compose or Microsoft Outlook than it was to autocorrect. Both Smart Compose and Outlook generate automatic suggestions on how to continue or complete a sentence. While this assistant didn’t write the essay itself, it acted as a co-writer that suggested letters and phrases. Accepting a suggestion only required a click.
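To make that interaction concrete, a console-level version of the co-writing loop might look like the sketch below, reusing the hypothetical `get_suggestion` helper from earlier. The study’s actual platform was a web interface, so this is only a schematic of the propose-then-accept flow.

```python
# Schematic co-writing loop: the assistant proposes a phrase after each bit
# of typing, and the writer accepts it with a single keypress or ignores it.
# Reuses the hypothetical get_suggestion() helper sketched above.

def cowrite(stance: str) -> str:
    draft = ""
    while True:
        typed = input("You (or /done to finish): ")
        if typed.strip() == "/done":
            break
        draft += typed + " "
        suggestion = get_suggestion(draft, stance)
        if input(f"Suggestion: {suggestion!r} -- accept? [y/N] ").strip().lower() == "y":
            draft += suggestion + " "  # accepting takes one action, as in the study
    return draft

if __name__ == "__main__":
    essay = cowrite("pessimist")  # e.g., the techno-pessimist condition
    print("\nFinal essay:\n" + essay)
```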

For some, the AI assistant was geared to suggest words that would ultimately result in positive responses. For others, it was biased against social media and pushed negative responses. (There was also a control group that did not use the AI at all.) It turned out that anyone who received AI assistance was twice as likely to go with the bias built into the AI, even if their initial opinion had been different. People who kept seeing techno-optimist language pop up on their screens were more likely to say that social media benefits society, while subjects who saw techno-pessimist language were more likely to argue the opposite.

It's unclear at this point whether the experience had a lasting influence on the study's participants, that is, whether the biased help they received went on to shape their opinions after the essays were completed.

Still, there are clearly some worrisome implications to these results. Jakesch and his colleagues are concerned that AI influence may affect everything from marketing to elections. With programs like ChatGPT being used to generate entire essays that make the human involved more of an editor than the primary writer, the sources of opinions start to blur. And the influence of the software can extend beyond the written material itself, as advertisers and policymakers often rely on online material for a better understanding of what changes people are looking for. There is no way for them to know whether the opinions of random keyboard warriors are completely their own or were influenced by AI in some way.

Then there is the issue of whether AI assistants can be exploited for their biases. The danger is that they can be modified to have stronger biases that may be used to push products, encourage behaviors, or further a political agenda. “Publicizing a new vector of influence increases the chance that someone will exploit it,” Jakesch also said in the study. “On the other hand, only through public awareness and discourse [can] effective preventative measures be taken at the policy and development level.”

AI may be convincing enough to sway us, but we do have the power to control it. Software can only interfere with writing as much as its creators program it to, and as much as writers allow it. Any writer can use an AI to their advantage by taking text generated by an AI and editing it to reflect a particular message.
