Twenty tech companies developing artificial intelligence (AI) announced on Friday, Feb. 16, a commitment to prevent their software from being used to interfere with elections, including in the United States.
The agreement acknowledges that AI products pose a significant risk, especially in a year when around four billion people worldwide are expected to participate in elections. The document highlights concerns about deceptive AI in election content and its potential to mislead the public, posing a threat to the integrity of electoral processes.
The agreement also acknowledges that global lawmakers have been slow to respond to the rapid progress of generative AI, leading the tech industry to explore self-regulation. Brad Smith, vice chair and president of Microsoft, voiced his support in a statement.
The 20 signatories of the pledge are Microsoft, Google, Adobe, Amazon, Anthropic, Arm, ElevenLabs, IBM, Inflection AI, LinkedIn, McAfee, Meta, Nota, OpenAI, Snap, Stability AI, TikTok, TrendMicro, Truepic and X.
This is not a drill: We are in one of the most consequential election years in recent memory. Social-media companies need to step up to guard against the harms of #AI.
Our statement: https://t.co/shTZEQIsVt — Free Press (@freepress) February 16, 2024
However, the agreement is voluntary and doesn’t go as far as a complete ban on AI content in elections. The 1,500-word document outlines eight steps the companies commit to taking in 2024. These steps involve creating tools to differentiate AI-generated images from genuine content and ensuring transparency with the public about significant developments.
Free Press, an open internet advocacy group, called the commitment an empty promise, arguing that tech companies had failed to follow through on previous election-integrity pledges after the 2020 election. The group advocates for increased oversight by human reviewers.
Related: Your right to bear AI could soon be infringed upon
U.S. Representative Yvette Clarke said she welcomed the tech accord and wants to see Congress build on it.
Clarke has sponsored legislation to regulate deepfakes and AI-generated content in political ads.
On Jan. 31, the Federal Communications Commission voted to outlaw robocalls that contain AI-generated voices. This came after a fake robocall impersonating President Joe Biden ahead of January’s New Hampshire primary caused widespread alarm about the potential for counterfeit voices, images and video in politics.
Magazine: Crypto+AI token picks, AGI will take ‘a long time’, Galaxy AI to 100M phones: AI Eye