Key Points
- OpenAI is deploying a global age prediction system for ChatGPT accounts.
- The model uses behavioral and account‑level signals to estimate user age.
- Incorrectly flagged users must verify age via a selfie on the Persona platform.
- The move follows criticism that AI firms add safety features only after harm occurs.
- OpenAI faced a wrongful‑death lawsuit linked to a teen’s use of ChatGPT.
- An “adult mode” for NSFW content is also being prepared.
- Concerns exist that minors may try to bypass the new protections.
- Similar age‑restriction challenges have appeared on platforms like Roblox.
OpenAI Launches Age Prediction for ChatGPT
OpenAI is the latest company to join the growing trend of restricting access based on users’ age. The firm is beginning a global rollout of an age prediction tool designed to determine whether a ChatGPT user is a minor. “The model looks at a combination of behavioral and account-level signals, including how long an account has existed, typical times of day when someone is active, usage patterns over time, and a user’s stated age,” the company’s announcement reads.
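OpenAI has not disclosed how these signals are weighted or combined. Purely as an illustration, the sketch below shows what a simple scoring heuristic over such signals could look like; every field name, threshold, and weight here is hypothetical and invented for this example, not taken from OpenAI’s announcement.

```python
from dataclasses import dataclass


@dataclass
class AccountSignals:
    """Hypothetical per-account signals of the kind described in the announcement."""
    account_age_days: int       # how long the account has existed
    median_active_hour: int     # typical hour of day (0-23) when the user is active
    stated_age: int             # age the user reported at signup
    weekday_usage_ratio: float  # share of sessions falling on weekday afternoons


def estimate_minor_likelihood(s: AccountSignals) -> float:
    """Combine weak signals into a rough 0-1 likelihood that the user is a minor.

    Weights are illustrative only; a production system would presumably use a
    trained model rather than hand-tuned thresholds.
    """
    score = 0.0
    if s.stated_age < 18:
        score += 0.5   # self-reported age is the most direct single signal
    if s.account_age_days < 90:
        score += 0.1   # a new account offers little history to contradict it
    if 15 <= s.median_active_hour <= 22:
        score += 0.2   # after-school / evening activity pattern
    if s.weekday_usage_ratio > 0.6:
        score += 0.2
    return min(score, 1.0)


if __name__ == "__main__":
    user = AccountSignals(account_age_days=30, median_active_hour=16,
                          stated_age=16, weekday_usage_ratio=0.7)
    likelihood = estimate_minor_likelihood(user)
    # Flagged users would then be routed to a verification step (e.g., Persona).
    action = "route to age verification" if likelihood >= 0.5 else "no action"
    print(f"minor likelihood: {likelihood:.2f} -> {action}")
```

In any real deployment, the key design question is what happens at the decision boundary, which is exactly where the selfie-based appeal process described next comes in.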
If the system incorrectly categorizes a user as underage, they will be required to submit a selfie through the Persona age‑verification platform to correct the mistake. This verification step aims to ensure that the age estimate can be challenged and validated, reducing the risk of wrongful restriction.
Context and Industry Practices
The introduction of the tool comes amid broader criticism of artificial‑intelligence companies for shipping new features before putting comprehensive safety safeguards in place. Observers note that many firms layer on protective measures only after a feature causes harm. OpenAI itself faced a wrongful‑death lawsuit involving a teenager who allegedly used ChatGPT to plan suicide. Following that incident, the company began considering automatic content restrictions for underage users and launched a mental‑health advisory council.
Future Plans and Potential Risks
OpenAI is also preparing to launch an “adult mode” that would allow users to create and consume content classified as not safe for work (NSFW). The introduction of such a mode raises concerns that minors might seek ways to circumvent the age‑verification system in order to access restricted material. Similar challenges have been observed on platforms like Roblox, where efforts to protect younger users have sometimes been sidestepped by determined adolescents.
While the age prediction tool represents a proactive step toward safeguarding minors, the effectiveness of the system will depend on the accuracy of its signals and the robustness of the verification process. Critics argue that any technical solution must be paired with ongoing policy oversight and user education to prevent misuse.
Implications for the AI Landscape
The rollout signals a shift in how AI providers approach user safety, moving from reactive patches to more anticipatory measures. By embedding age‑estimation directly into the user experience, OpenAI hopes to balance the openness of its conversational model with the responsibility of protecting vulnerable users. The industry will likely watch closely to see whether the tool reduces incidents involving minors and how effectively it can be enforced without imposing undue friction on legitimate users.
Source: engadget.com