
Google Asks for More AI Content Disclosures in Political Ads

The search giant added an explicit checkbox asking political advertisers to disclose "altered or synthetic content."

Ian Sherr, Contributor and Former Editor at Large / News

Google's evolving AI disclosure efforts follow similar moves by Instagram owner Meta.

Tatiana Lavrova/Getty Images

Google told advertisers on Monday that it's adding a newly required disclosure tool to its systems, designed to identify political ads that have been altered or created using artificial intelligence.

In an update on its advertising policies website, Google said it will now require advertisers to tick a checkbox if a political ad contains "synthetic or digitally altered content that inauthentically depicts real or realistic-looking people." In some cases, Google said, it will generate automatic in-ad disclosures; in others, advertisers must still include a prominent disclosure in the ad itself, in addition to checking the box.

Google's moves are the latest in a series of efforts across the tech industry to respond to the growing wave of content created or altered by AI tools. Tech companies, including web giants such as Google owner Alphabet and Instagram owner Meta, have identified AI as a key vector for spreading disinformation and misinformation ahead of the 2024 US presidential election in November. Even more telling, Google's researchers said bad-faith actors overwhelmingly use AI to create disinformation about politicians and celebrities, according to a report last month in the Financial Times.

Read more: How Close Is That Photo to the Truth? What to Know in the Age of AI


Earlier this week, Meta announced an update to its policies covering posts on its Facebook, Instagram and Threads social networks that may have been created or edited by AI. The company said that when it detects AI manipulation, it will add an "AI Info" button to the post, encouraging users to learn more about what its systems detected. Adobe, Apple, OpenAI, Google and other tech firms have also promised to add metadata labels to images that are created or edited with AI tools, though bad-faith actors keep finding new ways to fool detection systems.

For Google, this new AI disclosure rule expands on efforts from the past two years. In 2022, the company banned intentionally misleading "deepfake" AI-manipulated likenesses of other people. Then, last year, the company updated its political content policies to require that advertisers "prominently disclose when their ads contain synthetic content that inauthentically depicts real or realistic-looking people or events."

As CNET's Connie Guglielmo reported at the time, Google's policy went beyond images, video or audio created by AI. Google also flagged cases where AI tools are used for edits that result in inaccurate depictions of actual events.