On November 26, 2025, the European Parliament adopted a non-legislative report with 483 votes in favor, 92 against, and 86 abstentions. The report expresses grave concern over the physical and mental health risks minors face in online environments. It recommends a uniform EU-wide minimum age of 16 for access to social media, video-sharing platforms, and AI companion chatbots, with teens aged 13 to 16 requiring parental consent to use these services.
The proposal seeks to ensure age-appropriate online engagement and calls for urgent action on the ethical and legal challenges posed by generative AI tools, including deepfakes, companion chatbots, AI agents, and AI-driven applications that generate manipulated images without consent. The report specifically urges restrictions on manipulative design and addictive features such as infinite scrolling, autoplay, pull-to-refresh, reward loops, and harmful gamification, which can exacerbate online addiction and undermine children's attention spans and healthy digital habits. Although non-binding, the report will inform future EU legislation and may prompt platforms to proactively revise their policies to better safeguard minors.
Countries worldwide are increasingly recognizing the risks that AI poses to minors and are introducing protective measures. On October 13, 2025, California's governor signed Senate Bill 243, which requires chatbot operators to display clear, prominent notices stating that the product is operated by AI and that the user is not interacting with a human; such notices must appear at least every three hours. Starting in July 2027, operators must submit annual reports to the California Office of Suicide Prevention, and affected users may bring civil actions for injunctive relief and damages of at least $1,000.
On October 25, 2025, Australia's eSafety Commissioner issued formal notices under the Online Safety Act to four providers of AI companion chatbots: Character.AI, Nomi, ChAI, and Chub.AI. The operators must explain what safeguards they have in place to prevent children and adolescents from being exposed to sexually suggestive conversations, sexual imagery, or content relating to suicide or self-harm, with the aim of securing online safety for minor users.
The international community is intensifying its scrutiny of the impact that generative AI and digital platforms have on minors. Through age restrictions, disclosure mandates, and reporting obligations, governments are acting via legislation and regulation to create safer digital spaces, mitigate AI's potential harms, and ensure the next generation can explore technology under healthy, protected conditions.