Google has announced a collaboration with leading companies such as Adobe, BBC, Microsoft, and Sony to develop markings for digital content. The marks will indicate when and how a photo, video, audio clip, or other file was produced or altered, including content created or altered by artificial intelligence (AI).
Google stated that it will explore integrating digital certification into its products and services, but did not specify when it will begin applying the marking or how broad its scope will be. Services that could incorporate the marking include those that integrate Google’s chatbot, such as Gmail and Docs. YouTube, where users upload their own video content, is also expected to be included: Google will mark YouTube content created or altered by AI.
Google also announced its participation in the Coalition for Content Provenance and Authenticity (C2PA), which is developing a universal standard for marking content and documenting its provenance.
OpenAI also announced that its AI tools will soon add watermarks to images in accordance with the C2PA standard. Images created with ChatGPT and DALL-E will include watermarks and hidden metadata intended to identify them as AI-generated.
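To illustrate how such hidden provenance data can be detected, here is a minimal sketch in Python (not taken from Google’s, OpenAI’s, or Meta’s announcements). It assumes the common case in which a C2PA manifest is embedded in a JPEG file as a JUMBF box inside an APP11 segment, and it simply scans the file’s segments for a "c2pa" label. The file path and the helper name `has_c2pa_manifest` are illustrative only.

```python
import struct
import sys

def has_c2pa_manifest(path: str) -> bool:
    """Rough check for an embedded C2PA manifest in a JPEG file.

    Assumption: the manifest is stored as a JUMBF box inside an APP11
    (0xFFEB) segment, as the C2PA specification describes for JPEG.
    This sketch only looks for such a segment containing the "c2pa"
    label; it does not parse or validate the manifest itself.
    """
    with open(path, "rb") as f:
        data = f.read()

    if not data.startswith(b"\xff\xd8"):  # SOI marker missing: not a JPEG
        return False

    offset = 2
    while offset + 4 <= len(data):
        if data[offset] != 0xFF:          # lost marker sync; give up
            break
        marker = data[offset + 1]
        if marker == 0xDA:                # SOS: compressed image data starts
            break
        # Each segment stores a 2-byte big-endian length that includes itself.
        (length,) = struct.unpack(">H", data[offset + 2:offset + 4])
        segment = data[offset + 4:offset + 2 + length]
        if marker == 0xEB and b"c2pa" in segment:
            return True                   # APP11 segment carrying a C2PA label
        offset += 2 + length
    return False

if __name__ == "__main__":
    print(has_c2pa_manifest(sys.argv[1]))
```

In practice, verification would rely on the official C2PA tooling and its library bindings, which also validate the manifest’s signatures, rather than a hand-rolled byte scan like this.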
Google’s announcement comes just days after Meta’s announcement that it will start marking images created by AI on its platforms, Facebook and Instagram, ahead of the 2024 presidential election in the United States.
This step responds to concerns that false political content could spread and unduly influence election results. Meta will collaborate with other companies in the industry to develop technical standards that allow it to identify and mark AI-generated images, primarily those created with tools from Google, Microsoft, OpenAI, Adobe, Midjourney, and Shutterstock.
According to Meta, the marking policy will take effect in the coming months and will initially apply only to images; video and audio are excluded for now. Meta will also roll out a feature on its platforms that lets users disclose when they share video or audio created or altered by AI. Disclosure may become mandatory, and users who do not comply could face sanctions. Additionally, if an image, video, or audio clip created or altered by AI poses a high risk of materially deceiving the public, Meta will make the marking more prominent.