The European Commission is pushing major tech platforms to implement rigorous safeguards against generative AI-driven disinformation, citing the critical threat of deepfakes ahead of upcoming national and EU-wide elections. While the forthcoming EU AI Act will eventually mandate user disclosures for AI-generated content, the Commission is currently leveraging its voluntary Code of Practice on Disinformation as a vital stop-gap measure to ensure platforms proactively mitigate these risks.
The Role of the Digital Services Act (DSA)
EU officials have clarified that adherence to the non-binding Code is viewed as a key indicator of compliance with the Digital Services Act (DSA). This hard-law regulation requires very large online platforms (VLOPs) and very large online search engines (VLOSEs) to actively assess and mitigate societal risks, including the spread of manipulated media.
“Upcoming national elections and the EU elections will be an important test for the Code that platform signatories should not fail,” said Věra Jourová, European Commission Vice President. “Platforms will need to take their responsibility seriously, in particular in view of the DSA that requires them to mitigate the risks they pose for elections.”
The latest transparency reports from 44 signatories—including Google, Meta, Microsoft, and TikTok—have been published via the EU’s Disinformation Code Transparency Center, marking the most comprehensive data set since the initiative began in 2018.
Google: Watermarking and Metadata
Google’s report highlights its commitment to responsible AI development, specifically addressing concerns surrounding misinformation in its Bard chatbot and Search products. The tech giant plans to integrate watermarking and metadata techniques into its generative models. Furthermore, Google committed to leveraging the IPTC Photo Metadata Standard to label AI-generated images in Search, allowing creators and publishers to apply similar markup for transparency.
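For context on what IPTC-based labeling looks like in practice: the IPTC Photo Metadata Standard defines a "Digital Source Type" vocabulary that includes a term for AI-generated media (trainedAlgorithmicMedia). A minimal sketch of how a consumer might check for that label, assuming the image's XMP/IPTC metadata has already been extracted into a dictionary by a separate parsing tool (the function name and dictionary shape here are illustrative, not from Google's report):

```python
# IPTC's controlled-vocabulary term identifying media generated
# purely by a trained AI model.
TRAINED_ALGORITHMIC_MEDIA = (
    "http://cv.iptc.org/newscodes/digitalsourcetype/trainedAlgorithmicMedia"
)

def is_ai_generated(metadata: dict) -> bool:
    """Return True if the IPTC Digital Source Type marks the image as AI-generated.

    `metadata` is assumed to be a flat dict of XMP property names to values,
    as a metadata-extraction tool might produce.
    """
    return metadata.get("Iptc4xmpExt:DigitalSourceType") == TRAINED_ALGORITHMIC_MEDIA

# Hypothetical extracted metadata for an AI-generated image:
sample = {"Iptc4xmpExt:DigitalSourceType": TRAINED_ALGORITHMIC_MEDIA}
print(is_ai_generated(sample))  # True
print(is_ai_generated({}))      # False
```

The same property, written rather than read, is what creators and publishers would embed so that downstream surfaces like Search can recognize the content as AI-generated.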
Microsoft: Governance and Content Provenance
Microsoft, a key backer of OpenAI, emphasizes a “whole of company” approach to AI governance. Its strategy includes the implementation of a Responsible AI standard and partnerships with organizations like Truepic and C2PA to combat manipulated media. While Microsoft has faced scrutiny over early Bing AI performance, it claims to have implemented defensive search interventions and robust user reporting processes to flag harmful content.
TikTok and Meta: Policy Updates
TikTok has updated its community guidelines to require users to disclose AI-generated or manipulated content that depicts realistic scenes. The platform remains focused on fighting covert influence operations (CIOs) and is collaborating with the Partnership on AI (PAI) to develop better detection capabilities. Similarly, Meta is participating in the Code’s Task Force Working Group on Generative AI and has launched a “Community Forum on Generative AI” to gather public feedback on ethical AI principles.
The Threat of Kremlin Propaganda
Beyond generative AI, the EU is intensifying its focus on Russian state-sponsored disinformation. Jourová warned that the Kremlin is weaponizing information to undermine democratic values, creating a “multi-million euro weapon of mass manipulation.”
Recent platform reports demonstrate the scale of this issue: YouTube terminated over 400 channels linked to the Internet Research Agency (IRA) between January and April 2023, while TikTok fact-checked 832 war-related videos, removing 211. Microsoft reported that Bing Search intervened in nearly 800,000 queries related to the Ukraine crisis to promote reliable information.
Data Access and Platform Accountability
The Commission continues to demand better access to data for researchers to scrutinize disinformation flows. Jourová specifically criticized platforms that fail to invest in fact-checking for smaller member states and languages. Notably, X (formerly Twitter) remains under intense pressure after withdrawing from the Code, with early data suggesting it currently performs the worst regarding disinformation ratios. Under the DSA, the EU now holds the power to fine non-compliant platforms up to 6% of their global annual turnover.