Key Points
- OpenAI introduces an age‑prediction system for ChatGPT users who have not disclosed their age.
- The algorithm uses behavioral signals, such as account age and activity patterns, to estimate whether a user is under 18.
- Users flagged as underage can confirm their age through Persona with a live selfie and a government‑issued ID.
- Content filters for under‑18 users block graphic violence, self‑harm, risky challenges, sexual or violent role‑play, and harmful beauty or dieting content.
- The initiative aligns with broader industry moves toward age verification, parental controls, and new regulations targeting youth online safety.
- Experts note high accuracy rates for modern facial‑recognition and age‑estimation technologies, but real‑world performance at scale remains uncertain.
- Advocates call for a holistic, safety‑by‑design approach that includes transparency, accountability, and digital‑literacy education.
OpenAI’s New Age‑Prediction Feature
OpenAI announced that ChatGPT will now employ an age‑prediction algorithm for accounts whose users have not disclosed their age. The technology examines a range of behavioral cues, including how long an account has existed and when the user is active, to estimate whether the user is under 18. If the system flags a user as underage, the individual can confirm their age through a verification process offered by Persona, which requires a live selfie and a government‑issued identification document.
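OpenAI has not published how the model works internally, but the broad shape the company describes, behavioral signals feeding a binary under‑18 decision with flagged accounts routed to verification, can be sketched. The feature names, weights, and thresholds below are invented purely for illustration and are not OpenAI's:

```python
# Hypothetical sketch only: OpenAI has not disclosed its model. This shows
# the *kind* of behavioral-signal scoring the article describes, with
# invented features and hand-picked thresholds standing in for a trained classifier.
from dataclasses import dataclass


@dataclass
class AccountSignals:
    account_age_days: int             # how long the account has existed
    late_night_activity_ratio: float  # share of sessions at atypical hours
    stated_age: int | None = None     # None if the user never disclosed an age


def likely_under_18(signals: AccountSignals) -> bool:
    """Return True if behavioral signals suggest an under-18 user.

    A real system would combine many more signals in a learned model;
    these two cues and their weights are assumptions for the sketch.
    """
    if signals.stated_age is not None:
        return signals.stated_age < 18
    score = 0.0
    if signals.account_age_days < 90:
        score += 0.4
    if signals.late_night_activity_ratio > 0.5:
        score += 0.3
    # Accounts above the threshold would be routed to Persona verification,
    # where a live selfie and government ID can override the prediction.
    return score >= 0.5
```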
Expanded Safeguards for Younger Users
The age‑prediction tool is part of OpenAI’s broader effort to apply “safeguards to reduce exposure to sensitive or potentially harmful content.” The company’s support page details the types of material that will be filtered for under‑18 users, such as graphic violence, self‑harm depictions, risky viral challenges, sexual or violent role‑playing, and content that promotes extreme beauty standards, unhealthy dieting, or body shaming.
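The filtered categories amount to a policy deny list applied to under‑18 sessions. A minimal sketch of how such a list might be enforced, using category labels paraphrased from the support page and an invented check function (not OpenAI's implementation):

```python
# Illustrative only: category names paraphrase OpenAI's support page;
# the enforcement logic is an assumed sketch, not OpenAI's code.
BLOCKED_FOR_MINORS = {
    "graphic_violence",
    "self_harm_depictions",
    "risky_viral_challenges",
    "sexual_or_violent_roleplay",
    "extreme_beauty_standards",
    "unhealthy_dieting",
    "body_shaming",
}


def should_block(content_categories: set[str], user_is_minor: bool) -> bool:
    """Block a response if any of its labels is restricted for minors."""
    return user_is_minor and bool(content_categories & BLOCKED_FOR_MINORS)
```

In practice the hard part is the upstream classifier that assigns those category labels to a given response; the deny‑list check itself is the simple step.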
Industry Context and Legal Pressure
OpenAI’s initiative reflects a growing trend among technology platforms toward age verification and parental controls. Recent moves, such as Roblox’s mandatory age checks and Australia’s new law banning social media for children under 16, illustrate the mounting regulatory and public scrutiny of youth exposure to online content. Laws proposed or enacted in several U.S. states likewise push platforms toward age‑based access restrictions.
Expert Views on Accuracy and Effectiveness
Jake Parker, senior director of government relations at the Security Industry Association, noted that modern facial‑recognition and age‑estimation algorithms can achieve high accuracy rates—over 99.5% for identity matching and more than 95% for age estimation—when properly implemented. However, Parker cautioned that the age‑prediction system’s performance across OpenAI’s massive user base, estimated at about 800 million weekly active users, remains to be seen.
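Parker's caveat about scale is easy to quantify. A back‑of‑the‑envelope calculation, assuming for illustration that the quoted 95% age‑estimation rate applied uniformly across the article's figure of 800 million weekly users:

```python
# Figures come from the article; the uniform-error assumption is ours.
weekly_users = 800_000_000
age_estimation_accuracy = 0.95  # "more than 95%" per Parker

misclassified = weekly_users * (1 - age_estimation_accuracy)
print(f"Up to {misclassified:,.0f} users misclassified per week")
# -> Up to 40,000,000 users misclassified per week
```

Even under generous assumptions, the absolute error pool runs into the tens of millions, which is precisely the at‑scale uncertainty Parker highlights.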
Calls for a Holistic Approach
Kristine Gloria, chief operating officer of Young Futures, emphasized that technical solutions alone are insufficient. She advocated for “safety‑by‑design” practices that integrate transparency, accountability, and digital‑literacy education into platform development. Gloria argued that families need broader support to navigate the challenges presented by generative AI, rather than relying solely on monitoring tools.
Looking Ahead
OpenAI’s age‑prediction rollout adds a new dimension to its parental‑control features, but its ultimate impact will depend on the accuracy of the algorithm, user compliance with verification processes, and the industry’s collective commitment to protecting young people online. The move signals an ongoing shift toward more rigorous age‑based content moderation across digital services.
Source: cnet.com