AI-generated content has played a much smaller role in global election disinformation than many officials and researchers had feared, according to a new analysis from Meta. In an update on its efforts to safeguard dozens of elections in 2024, the company said AI content made up only a fraction of the election-related misinformation detected and labeled by its fact-checkers.
“During the election period, in the major elections listed above, ratings on AI content related to elections, politics, and social topics represented less than 1% of all fact-checked misinformation,” the company said in a blog post, referring to elections in the US, UK, Bangladesh, Indonesia, India, Pakistan, France, South Africa, Mexico, and Brazil, as well as the EU parliamentary elections.
The update comes after government officials and researchers spent months sounding the alarm about the role generative AI could play in boosting election disinformation in a year when more than 2 billion people were expected to go to the polls. But those fears have largely failed to materialize, at least on Meta’s platforms, according to the company’s president of global affairs, Nick Clegg.
“People were understandably concerned about the potential impact that generative AI would have on the elections taking place this year, and there were all kinds of warnings about the potential risks of things like widespread deepfakes and AI-enabled disinformation campaigns,” Clegg said during a briefing with journalists. “From what we have monitored across our services, it appears that these risks did not materialize in any meaningful way and that any such impact was modest and limited in scope.”
Meta did not say how much election-related AI content was caught by its fact-checkers in the run-up to major elections. The company sees billions of pieces of content every day, so even small percentages can add up to a large number of posts. Clegg, however, credited Meta’s policies, including its expansion of AI labeling earlier this year following criticism from the Oversight Board. He noted that Meta’s AI image generator blocked 590,000 requests to create images of Donald Trump, Joe Biden, Kamala Harris, JD Vance, and Tim Walz in the month leading up to Election Day in the United States.
At the same time, Meta has increasingly taken steps to distance itself from politics, as well as from some of its past efforts to police misinformation. The company changed users’ default settings on Instagram and Threads to stop recommending political content, and it has deprioritized news on Facebook. Mark Zuckerberg has said he regrets how the company handled some of its misinformation policies during the pandemic.
Looking ahead, Clegg said Meta is still trying to strike the right balance between enforcing its rules and allowing free expression. “We know that when we enforce our policies, our error rates are still too high, which gets in the way of free expression,” he said. “I think we now really want to redouble our efforts to improve the precision and accuracy with which we act.”