INTERNATIONAL

New research indicates that AI-generated propaganda is nearly as persuasive as traditional propaganda

In a study involving more than 8,000 US respondents, a team of academics found that propaganda generated by artificial intelligence is almost as persuasive as the real thing.

They also warned that propagandists could use artificial intelligence (AI) to expose individuals to far larger numbers of articles, scaling up the volume of propaganda and making it harder to identify.

Researchers from Stanford University and Georgetown University in the US selected six English-language articles for the study. Investigative journalists and academic researchers believe these pieces likely originated from covert propaganda campaigns by Iranian or Russian state-affiliated organizations.

The researchers noted that several claims in these articles about US foreign policy were false, such as the claim that Saudi Arabia had promised to help fund the US-Mexico border wall, or that US intelligence had fabricated evidence of chemical weapons use by the Syrian government.

For each of these articles, the research team fed GPT-3, the large language model that powers ChatGPT, one or two sentences from the original propaganda.

Trained on vast amounts of text data, such models can understand and respond in the natural language people use to communicate.

GPT-3 was also given three additional propaganda articles on unrelated topics as models for style and structure.
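In practical terms, this amounts to few-shot prompting: example articles set the style, and a short seed from the target article steers the content. Below is a minimal sketch of that setup using OpenAI's completions API; the model name, prompt wording, and placeholder texts are illustrative assumptions, not the study's actual materials.

```python
# Minimal sketch of the few-shot setup described above: three unrelated
# propaganda articles serve as style/structure examples, and one or two
# sentences from a target article seed the new text. All strings and the
# model name are illustrative placeholders, not the study's materials.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

style_examples = [
    "Example article 1 (full text of an unrelated propaganda piece)...",
    "Example article 2 (full text of an unrelated propaganda piece)...",
    "Example article 3 (full text of an unrelated propaganda piece)...",
]

# One or two sentences taken from the original target article.
seed_sentences = "Opening sentence or two from the original article..."

# Concatenate the examples and the seed into a single completion prompt.
prompt = "\n\n---\n\n".join(style_examples) + "\n\n---\n\n" + seed_sentences

response = client.completions.create(
    model="davinci-002",  # stand-in for the GPT-3 model used in the study
    prompt=prompt,
    max_tokens=600,       # roughly article length
    temperature=0.7,
)

print(response.choices[0].text)
```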

In December 2021, the researchers showed the original propaganda articles, along with the AI-generated ones, to 8,221 US respondents recruited through the survey company Lucid.

They explained that once the study was over, participants were informed that the articles came from propaganda sources and may have contained misleading information.

The team found that propaganda generated by GPT-3 was almost as persuasive as the real thing.

After reading the original propaganda, over 47% of participants agreed with the claims, compared with just over 24% of those who had not seen the article.

The AI-generated propaganda performed nearly as well: around 44% of participants who read it agreed with the claims, indicating that many AI-produced articles were just as convincing as those written by humans, according to the researchers.
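To put those figures side by side, here is a small arithmetic sketch using the approximate percentages reported above (the study's exact numbers differ slightly):

```python
# Approximate agreement rates reported in the article, in percent.
control = 24.0         # participants who saw no article
human_written = 47.0   # participants who read the original propaganda
ai_generated = 44.0    # participants who read GPT-3-generated propaganda

# Persuasive "lift" = increase in agreement over the no-article baseline.
print(f"Human-written lift: {human_written - control:.0f} percentage points")
print(f"AI-generated lift:  {ai_generated - control:.0f} percentage points")
# Roughly 23 points versus 20 points -- nearly as persuasive.
```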

They also cautioned that because companies have since released larger, more capable models, their estimates may not fully capture the persuasive power of today's large language models.

In their study, the researchers said, “We anticipate that these improved models, and others in the pipeline, would produce propaganda at least as persuasive as the text we administered.”

Propagandists, they said, could therefore use AI to mass-produce credible misinformation with minimal effort.

“Regarding risks to society, propagandists are likely already well aware of the capabilities of large language models; historically, propagandists have been quick both to adopt new technologies and incorporate local language speakers into their work,” according to the research.

Their research also suggests that propagandists could use AI to expose individuals to a large number of articles, increasing the volume of propaganda while making it harder to detect, since varied language and styles could give the impression that the material comes from legitimate news sources or real people.

“As a result, the societal benefit of assessing the potential risks outweighs the possibility that our paper would give propagandists new ideas,” the authors concluded.

Looking ahead, the authors said that research into detecting the infrastructure used to deliver such material to its targets may become more critical, and that one possible avenue for future study is developing probing tools to help prevent the abuse of language models for propaganda operations.
