YouTube is taking steps designed to enhance online safety for teenagers, a move that arrives amid growing concerns about the impact of social media on youth mental health. The platform announced new measures to limit teens' exposure to potentially harmful videos, particularly those focusing on body image. These measures include limiting repeated recommendations of content that idealizes certain body types or depicts social aggression.
In addition to content restrictions, YouTube is expanding well-being features such as “take a break” reminders and a full-page crisis resource panel for sensitive searches. The panel aims to provide direct support and to steer users toward gentler search topics, such as self-compassion. This effort is part of YouTube’s broader initiative, in partnership with the World Health Organization and Common Sense Networks, to develop educational resources for both parents and teens about safe and empathetic online interactions.
YouTube’s initiatives also extend to AI-generated content, with new policies aimed at greater transparency and user protection. The platform will inform viewers when they are watching content created or altered by AI, with a particular focus on realistic deepfake videos that mimic identifiable individuals. Creators must explicitly disclose when they publish synthetic or altered content designed to appear realistic, for example, videos that depict events that never occurred or portray individuals saying or doing things they never did. Creators who violate these rules risk having their content taken down, and they may also face suspension or financial penalties.
Moreover, the policy enables users to request the removal of content that simulates an identifiable person’s face or voice, using the platform’s existing process for filing privacy complaints. The same removal option will be available to music companies and distributors when content replicates the voice of an artist they represent.
Meta Platforms (formerly Facebook) is also establishing new content policies for political ads, requiring advertisers to disclose when such ads have been created or altered using AI or other software. The move takes effect in 2024 and is particularly significant ahead of the 2024 U.S. presidential election, given the rise of generative AI technologies like ChatGPT. The policy also prohibits the use of Meta’s own AI ad-creation tools for political purposes.
However, Meta has clarified that advertisers won’t be required to disclose minor or inconsequential alterations, such as color adjustments or cropping. Violations could lead to ad rejection and, for repeat offenders, further penalties.