
Tech firms come together to fight AI-driven election manipulation

Leading tech firms agreed on Friday to voluntarily take “reasonable precautions” to prevent artificial intelligence tools from being used to disrupt democratic elections around the world.

At the Munich Security Conference, executives from Adobe, Amazon, Google, IBM, Meta, Microsoft, OpenAI, and TikTok announced a new framework for handling AI-generated deepfakes that are intended to deceive voters. Elon Musk’s X is among the twelve other businesses that have signed on to the agreement.

Nick Clegg, president of global affairs at Meta, the parent company of Facebook and Instagram, said in an interview ahead of the summit that “everyone recognizes that no one tech company, no one government, and no one civil society organization is able to deal with the advent of this technology and its possibly nefarious use on their own.”

The agreement is largely symbolic, but it targets increasingly realistic AI-generated images, audio, and video that “deceptively fake or alter the appearance, voice, or actions of political candidates, election officials, and other key stakeholders in a democratic election, or that provide false information to voters about when, where, and how they can lawfully vote.”

The businesses have not promised to ban or remove deepfakes. Rather, the agreement describes the steps they will take to detect and label misleading AI content when it is produced or shared on their platforms. It states that the companies will exchange best practices with one another and provide “swift and proportionate responses” when that content begins to circulate.

The vagueness of the commitments and the absence of any legally binding requirements likely helped win over a wide range of businesses, but disappointed advocates who had hoped for more concrete guarantees.

“The language isn’t quite as strong as one might have expected,” said Rachel Orey, senior associate director of the Elections Project at the Bipartisan Policy Center. “I believe it’s important to give credit where credit is due and recognize that the corporations do have a stake in ensuring that their products aren’t used to sabotage honest and transparent elections. However, it is entirely optional, and we will be monitoring their actions.”

Every business “quite rightly has its own set of content policies,” according to Clegg.

“This is not an attempt to put everyone in a straitjacket,” he said. In any case, he added, nobody in the industry believes that a whole new technological paradigm can be handled by ignoring issues, playing whack-a-mole, and hunting for anything that might mislead someone.

A number of American and European political figures also added their endorsements on Friday. While such an agreement cannot be comprehensive, European Commission Vice President Vera Jourova said, “it contains very impactful and positive elements.” She also warned that AI-fueled disinformation could bring about “the end of democracy, not only in the EU member states,” and urged fellow politicians to take responsibility for not using AI tools deceptively.

The accord, reached at the annual security gathering in the German city, comes as more than 50 nations are scheduled to hold national elections in 2024. Some have already done so, including Bangladesh, Taiwan, Pakistan and, most recently, Indonesia.

Attempts at AI-generated election meddling have already begun: robocalls mimicking the voice of U.S. President Joe Biden tried to dissuade people from casting ballots in last month’s primary election in New Hampshire.

Just days before the November elections in Slovakia, AI-generated audio recordings mimicked a candidate discussing plans to manipulate the results and raise beer prices. Fact-checkers scrambled to debunk the recordings as they spread across social media.

Politicians have also dabbled in the technology themselves, from inserting AI-generated imagery into advertisements to interacting with voters via chatbots.

Platforms are urged under the agreement to “pay attention to context and in particular to safeguarding artistic, satirical, political, and educational expression.”

According to the statement, the companies will focus on being transparent with users about their policies and will work to educate the public on how to avoid falling for AI fakes.

Most of the companies have previously said they are adding safeguards to their own generative AI tools, which can manipulate images and sound, while also working to identify and label AI-generated content so that social media users can tell whether what they are viewing is genuine. However, many of those proposed fixes have not yet been implemented, and the companies are under pressure to do more.

This pressure is especially strong in the United States, where Congress has yet to pass legislation governing AI in politics, leaving the companies largely to govern themselves.

The Federal Communications Commission recently confirmed that AI-generated audio clips used in robocalls are illegal; however, that ruling does not cover audio deepfakes shared on social media or in political advertisements.

Several social media platforms already have policies in place to deter misleading posts about election processes, whether or not they are produced by AI. Meta says it removes false information about “the dates, locations, times, and methods for voting, voter registration, or census participation,” as well as other fraudulent posts intended to obstruct someone’s ability to participate in civic life.

The agreement appears to be a “positive step,” according to Jeff Allen, co-founder of the Integrity Institute and a former Facebook data scientist. However, he would still like to see social media companies take additional steps to combat misinformation, like developing content recommendation systems that don’t put engagement first.

The agreement is “not enough,” according to Lisa Gilbert, executive vice president of Public Citizen, who also stated that AI firms should “hold back technology” like hyper-realistic text-to-video generators “until there are substantial and adequate safeguards in place to help us avert many potential problems.”

Notable signatories of Friday’s agreement include the security companies McAfee and Trend Micro, chip designer Arm Holdings, chatbot developers Anthropic and Inflection AI, voice-clone startup ElevenLabs, and Stability AI, maker of the image generator Stable Diffusion.

Conspicuously missing is Midjourney, another well-known AI image generator. The San Francisco-based company did not immediately respond to a request for comment on Friday.

One of the surprises of Friday’s deal was the inclusion of X, which had not been mentioned in an earlier announcement about the pending accord. Following his takeover of the former Twitter, Musk drastically reduced the platform’s content-moderation teams and has described himself as a “free speech absolutist.”

“Every citizen and company has a responsibility to safeguard free and fair elections,” said Linda Yaccarino, CEO of X, in a statement released on Friday.

She said, “X is committed to doing its share, working with peers to counter AI threats while simultaneously preserving free speech and enhancing transparency.”
