Meta, the parent company of Facebook and Instagram, has outlined its strategy for combating the misuse of generative artificial intelligence (AI) and protecting the integrity of the electoral process on its platforms ahead of the European Parliament elections in June 2024.
In a blog post on Feb. 25, Meta’s head of EU Affairs, Marco Pancini, said the principles behind the platform’s “Community Standards” and “Ad Standards” will also apply to AI-generated content.
“AI-generated content is also eligible to be reviewed and rated by our independent fact-checking partners,” he stated, with one of the ratings to show if the content is “altered,” meaning “faked, manipulated or transformed audio, video, or photos.”
The platform’s policies already require photorealistic images created using Meta’s AI tools to be labeled as such.
This latest announcement revealed that Meta is also building new features to label AI-generated content created with tools from other companies, such as Google, OpenAI, Microsoft, Adobe, Midjourney and Shutterstock, when users post it to any of its platforms.
Additionally, Meta said it plans to add a feature that lets users disclose when they share AI-generated video or audio so the content can be flagged and labeled, with potential penalties for those who fail to do so.
Related: Texas firm faces criminal probe for misleading US voters with Joe Biden AI
Advertisers running political, social or election-related ads that have been altered or created using AI must also disclose their use of the technology. The blog post said that between July and December 2023, Meta removed 430,000 ads across the European Union for failing to carry a disclaimer.
This topic has become increasingly relevant as major elections are set to take place around the world in 2024. Prior to this most recent update, both Meta and Google had already spoken out about rules regarding AI-generated political advertising on their platforms.
On Dec. 19, 2023, Google said it would limit answers to election queries on its AI chatbot Gemini — which was called Bard at the time — and its generative search feature in the lead-up to the 2024 presidential election in the United States.
OpenAI, the developer of the AI chatbot ChatGPT, has also sought to dispel fears of AI interference in global elections by creating internal standards to monitor activity on its platforms.
On Feb. 17, 20 companies, including Microsoft, Google, Anthropic, Meta, OpenAI, Stability AI and X, signed a pledge to curb AI election interference, acknowledging the potential dangers of the technology if left unchecked.
Governments around the world have also taken action to combat AI misuse ahead of local elections. The European Commission initiated a public consultation on proposed election security guidelines to reduce democratic threats posed by generative AI and deepfakes.
In the U.S., the use of AI-generated voices in automated phone calls was made illegal after a deepfake of President Joe Biden's voice circulated in scam robocalls and misled the public.