As deepfakes spread across popular tech platforms, Google joins an AI watermarking coalition

Google announced on Thursday that it would join a group of media and tech companies, including Adobe, Intel, and Microsoft, working to develop a standard for identifying content that has been created or modified by artificial intelligence.

Google said it will adopt Adobe's Content Credentials project, which lets creators attach a small "CR" symbol to media. The symbol links to details about where, when, and how the media was edited, serving as a form of metadata that flags AI involvement and lets viewers verify documents, videos, audio files, and photos.
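The core idea, a content hash bound to an edit history, can be sketched in a few lines. This is a deliberately simplified illustration, not the real C2PA format (actual Content Credentials manifests are JUMBF/CBOR structures signed with certificates); the function and field names here are hypothetical.

```python
import hashlib

def make_provenance_record(media_bytes, tool, actions):
    """Build a simplified provenance record: a hash of the media plus an
    edit history. Illustrative only -- real C2PA manifests are
    cryptographically signed, which this sketch omits."""
    return {
        "content_sha256": hashlib.sha256(media_bytes).hexdigest(),
        "tool": tool,
        "actions": actions,  # e.g. ["c2pa.created", "c2pa.edited"]
    }

def verify_provenance(media_bytes, record):
    """Check that the media bytes still match the recorded hash."""
    return hashlib.sha256(media_bytes).hexdigest() == record["content_sha256"]

photo = b"\xff\xd8 example image bytes"
record = make_provenance_record(photo, "ExampleEditor 1.0", ["c2pa.created"])
assert verify_provenance(photo, record)              # untouched media verifies
assert not verify_provenance(photo + b"x", record)   # any alteration fails
```

Because any change to the bytes breaks the hash, a viewer can tell whether the media still matches what the creator originally recorded.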

In an era of rapidly evolving AI technology that can create convincingly fake media and alter real media, the Coalition for Content Provenance and Authenticity, or C2PA, offers this method for institutions like news organizations and social media platforms to share trusted digital media. However, the group does not support enforcing such measures on all content.

“The way we think we’re trying to solve the problem is first, we want to have you have the ability to prove as a creator what’s true,” said Dana Rao, who leads Adobe’s legal, security and policy organization and co-founded the coalition. “And then we want to teach people that if somebody is trying to tell you something that is true, they will have gone through this process and you’ll see the ‘CR,’ almost like a ‘Good Housekeeping’ seal of approval.”

Advances in AI have made it feasible to automate editing tasks that were once time-consuming and technically demanding, putting synthetic media within reach of millions of people. That has opened room for artistic pursuits as well as more sinister activity, such as sexual abuse imagery and disinformation.

These developments have spurred efforts to restrict the technology or to label what AI has created. Watermarking is one such approach: it embeds signals, some obvious and some subtle, into media to make it easier to distinguish real from fake.
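A minimal sketch of an "invisible" watermark hides bits in the least significant bit of each pixel value, so the image looks unchanged but the signal can be read back. This is a textbook LSB illustration only; production systems such as Google DeepMind's SynthID, mentioned below, use far more robust, tamper-resistant techniques.

```python
def embed_watermark(pixels, bits):
    """Write each watermark bit into the least significant bit of a
    pixel value. Naive LSB embedding -- for illustration only."""
    out = list(pixels)
    for i, bit in enumerate(bits):
        out[i] = (out[i] & ~1) | bit
    return out

def extract_watermark(pixels, n):
    """Read back the first n embedded bits."""
    return [p & 1 for p in pixels[:n]]

original = [200, 201, 202, 203, 204, 205, 206, 207]
marked = embed_watermark(original, [1, 0, 1, 1])
assert extract_watermark(marked, 4) == [1, 0, 1, 1]
# Each pixel changes by at most 1, so the mark is imperceptible:
assert all(abs(a - b) <= 1 for a, b in zip(original, marked))
```

The trade-off such subtle signals face is robustness: simple LSB marks are destroyed by re-compression or resizing, which is why deployed systems embed watermarks in ways designed to survive editing.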

Google has developed its own line of consumer generative-AI products, including AI editing and creation tools and the Bard chatbot. It also owns YouTube, which, according to an NBC News investigation, hosts fake news channels that use similar AI tools to distribute misleading content and rack up millions of views.

In a news release about Google’s membership in the C2PA, Laurie Richardson, vice president of trust and safety at Google, stated that “collaborating with others in the industry to help increase transparency around digital content” is a crucial component of the company’s responsible approach to AI.

“This is why we are excited to join the committee and incorporate the latest version of the C2PA standard. It builds on our work in this space — including Google DeepMind’s SynthID, Search’s About this Image and YouTube’s labels denoting content that is altered or synthetic — to provide important context to people, helping them make more informed decisions.”

Like Microsoft, the other tech giants that have joined the C2PA have developed and invested in consumer generative-AI technology. Adobe itself built digital media-editing tools long before AI became widely used, and it now integrates generative AI into those products, lowering the barrier to producing complex, manipulated media at scale.

These rapid advances have had consequences. On Google's and Microsoft's search engines, top results for celebrity women's names plus the term "deepfakes" surface nonconsensual, sexually explicit deepfakes of those women. This kind of content has flooded popular social media platforms and eluded legal and criminal accountability. It includes videos that use AI to "swap" faces and clone voices, real photos edited by AI, and photos generated entirely by AI.

Rao cited the Israel-Hamas conflict as an ongoing example of a conflict in which questions have arisen about the veracity or manipulation of online images. Rao said Content Credentials would be helpful for verifying newsworthy media in situations like war zones. The BBC and The New York Times are among the news media outlets that have joined C2PA, and some camera manufacturers, like Canon, are integrating Content Credentials into their products.

“The media literacy component of this is crucial,” Rao said. “We need to shift our culture to one of verification before trust.”
