Key Points
- OpenAI rolled out an age‑prediction system for ChatGPT that automatically applies teen‑mode safeguards to accounts it judges likely to belong to minors.
- The model uses behavioral cues, account history, usage patterns, and language analysis to estimate user age.
- When uncertain, the system defaults to caution, leading to some adult users being misclassified as teens.
- Misclassified adults face content restrictions and are prompted to verify their age via a third‑party tool.
- The verification process may request official ID or a selfie video, though OpenAI claims it never sees the documents.
- Users express privacy concerns and frustration over the invasive verification steps.
- OpenAI states all verification data is deleted after use and promises ongoing refinements.
- Similar age‑estimation attempts on other platforms have also drawn complaints from adult users.
Purpose and Deployment
OpenAI introduced an age‑prediction feature for ChatGPT with the goal of automatically identifying accounts that likely belong to users under 18. By doing so, the company intends to apply a dedicated teen‑mode experience that incorporates safety filters and content restrictions, especially as the platform expands into education, family settings, and creative projects for younger users.
How the Model Works
The system evaluates a combination of behavioral signals, account history, usage patterns, and occasional language analysis to estimate a user’s age. When the model encounters uncertainty, it errs on the side of caution, meaning newer accounts, late‑night usage, or inquiries about teen‑relevant topics can trigger the teen‑mode safeguards even for long‑standing paid subscribers.
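The "err on the side of caution" policy described above can be sketched as a simple decision rule. This is a purely hypothetical illustration: OpenAI has not published its model, so the `AgeEstimate` structure, the confidence score, and both thresholds are assumptions made for the sake of the example.

```python
# Hypothetical sketch of a "default to caution" age gate.
# None of these names or thresholds come from OpenAI; they only
# illustrate the policy the article describes.

from dataclasses import dataclass


@dataclass
class AgeEstimate:
    predicted_age: float   # model's best guess at the user's age
    confidence: float      # 0.0 (no confidence) to 1.0 (certain)


def apply_teen_mode(estimate: AgeEstimate,
                    adult_age: int = 18,
                    min_confidence: float = 0.8) -> bool:
    """Return True if teen-mode safeguards should be applied.

    Any account the model cannot confidently place above the adult
    threshold is treated as a teen account until age is verified.
    """
    if estimate.predicted_age < adult_age:
        return True        # likely a minor
    if estimate.confidence < min_confidence:
        return True        # uncertain, so default to caution
    return False           # confidently an adult


# Under this rule, an adult account with ambiguous signals is still
# flagged, which matches the misclassifications users report:
print(apply_teen_mode(AgeEstimate(predicted_age=34.0, confidence=0.5)))  # True
```

The key design point is the second branch: a high predicted age alone is not enough; low confidence alone is sufficient to trigger the safeguards, which is why long‑standing adult subscribers with unusual usage patterns can still be placed in the teen experience.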
Adult Users Misclassified
Several adult subscribers have reported being mistakenly placed in the teen experience. These users encounter restrictions that block adult‑oriented subjects, including queries as innocuous as wine pairings. The misclassification has caused frustration, since affected users must take extra steps to prove their age.
Verification Process
OpenAI directs affected users to a verification tool located in Settings > Account. The process uses a third‑party service called Persona, which may ask for an official ID or a selfie video to confirm the user’s age. OpenAI states that it never views the submitted ID or image; Persona simply returns a yes‑or‑no result indicating whether the account belongs to an adult. The company also asserts that all data collected during verification is deleted after the process.
User Concerns
Many users view the request for personal documents as invasive, raising questions about data collection, privacy, and the potential for more aggressive identity verification policies in the future. Some fear that submitted materials could be used for training the AI, despite OpenAI’s claim that this does not occur.
Industry Context
Similar age‑estimation tools have been attempted on platforms such as YouTube and Instagram, often drawing complaints from adults who feel wrongly classified. Because many people now rely on ChatGPT for daily tasks, from office work to therapy‑style conversations, the effects of an invisible filter feel especially personal.
OpenAI’s Response and Outlook
OpenAI acknowledges the issue and says it will continue refining the model and improving the verification workflow based on user feedback. The company emphasizes its commitment to protecting younger users while aiming to minimize inconvenience for adults.
Source: techradar.com