How Meta Is Planning for Elections in 2024

By Nick Clegg, President, Global Affairs

Takeaways

  • Meta’s comprehensive approach to major elections has remained broadly consistent over many years.
  • We will block new political ads during the final week of the US election campaign.
  • We will require advertisers globally to disclose, in certain cases, when they use AI or other digital techniques to create or alter a political or social issue ad.

Next year, more than two billion people will head to the polls in elections across some of the world’s biggest democracies, including the United States, India, Indonesia, Mexico and the European Union. Over many years, Meta has developed a comprehensive approach to elections on our platforms. With so many important elections approaching, we are setting out how the policies and safeguards we have established over time will apply in 2024.

No tech company does more or invests more to protect elections online than Meta – not just during election periods but at all times. We have around 40,000 people working on safety and security, with more than $20 billion invested in teams and technology in this area since 2016. While much of our approach has remained consistent for some time, we’re continually adapting to stay on top of new challenges, including the use of AI. We’ve also built the largest independent fact-checking network of any platform, with nearly 100 partners around the world who review and rate viral misinformation in more than 60 languages.

This approach is consistent with how we have sought to prevent abuse of our platforms during recent major elections in Nigeria, Thailand, Turkey and Argentina, and this year’s state and local elections in the US. While we are conscious that every election brings its own challenges and complexities, we’re confident our comprehensive approach puts us in a strong position to protect the integrity of next year’s elections on our platforms.

Industry-Leading Transparency Around Political Ads

Since 2018, we have provided industry-leading transparency for ads about social issues, elections or politics. Advertisers who run these ads are required to complete an authorization process and include a “paid for by” disclaimer. These ads are then stored in our publicly available Ad Library for seven years. To give a sense of scale, there are now more than 15 million US entries in our Ad Library.

Starting in the new year, advertisers will also have to disclose when they use AI or other digital techniques to create or alter a political or social issue ad in certain cases. This applies if the ad contains a photorealistic image or video, or realistic-sounding audio, that was digitally created or altered to depict a real person saying or doing something they did not say or do. It also applies if an ad depicts a realistic-looking person who does not exist or a realistic-looking event that did not happen, alters footage of a real event, or depicts a realistic event that allegedly occurred but is not a true image, video or audio recording of that event.

As in previous years, we will also block new political, electoral and social issue ads during the final week of the US election campaign. Ads that began running before this restriction period will be allowed to continue during this time. Our rationale for this restriction period remains the same as it has been since 2020: we recognize that in the final days of an election, there may not be enough time to contest new claims made in ads. The restriction will lift the day after the election. You can find more details of our approach to the 2024 US presidential election in this fact sheet.

Preventing Election and Voter Interference

We continually review and update our election-related policies, and take action if content violates our Community Standards, including our policies on election and voter interference, hate speech, coordinating harm and publicizing crime, and bullying and harassment. We remove this content whether it was created by a person or AI.

Our teams fight both foreign interference and domestic influence operations, and have taken down more than 200 malicious influence campaigns engaged in what we call Coordinated Inauthentic Behavior. We’ve also designated more than 700 hate groups around the world – including more than 400 white supremacist organizations – and we continue to identify and assess new hate groups, particularly when they are tied to real-world violence. We’re also investing in proactive threat detection and have expanded our policies to help address harassment against election officials and poll workers.

We label state-controlled media on Facebook, Instagram and Threads so that users know when content is from a publication that may be wholly or partially under the editorial control of a government. As we have since 2020, we also block ads from state-controlled media outlets targeting people in the US.

For more information about how Meta approaches elections, visit our Preparing for Elections page.