Key Points
- Senators Josh Hawley and Richard Blumenthal introduce the GUARD Act.
- AI companies must verify user age before allowing chatbot access.
- Individuals under 18 would be prohibited from using AI chatbots.
- Chatbots must regularly disclose they are not human.
- Creation of sexual or self‑harm content for minors would be illegal.
- The bill calls for criminal and civil penalties for violations.
- Lawmakers cite child safety and prevention of manipulative AI.
- The proposal follows Senate hearings on AI risks to youth.
 
Background
In response to growing concerns about the impact of artificial‑intelligence chatbots on young people, Senators Josh Hawley (R‑MO) and Richard Blumenthal (D‑CT) have introduced a bill that would impose strict safeguards on AI developers. The legislation follows recent Senate hearings where safety advocates and parents highlighted potential risks associated with AI chat interfaces.
Key Provisions of the GUARD Act
The proposed law requires AI companies to verify the age of every user before granting access to chatbot services. Verification could involve uploading a government‑issued ID or using other reasonable methods such as facial scans. The bill explicitly bans anyone under the age of 18 from using AI chatbots.
In addition to age verification, the legislation mandates that chatbots disclose, at regular intervals, that they are not human. The bill also requires safeguards to prevent chatbots from claiming they are human. Finally, the act makes it illegal for any chatbot to produce sexual content aimed at minors or to promote self‑harm.
Lawmakers’ Rationale
Senator Blumenthal described the proposal as a “strict safeguard against exploitative or manipulative AI,” emphasizing the need for enforcement mechanisms that include criminal and civil penalties. Senator Hawley echoed concerns that without regulatory oversight, AI companies may prioritize profit over child safety.
Reactions and Outlook
The bill has drawn attention from both technology firms and child‑protection groups. Proponents argue that age verification and clear disclosures are essential to protect vulnerable users, while critics warn that the requirements could impose significant compliance costs on AI developers. The legislation is expected to undergo debate in the Senate, where further amendments may be considered.
Potential Impact
If enacted, the GUARD Act would set a new standard for AI accountability in the United States, potentially influencing global approaches to AI regulation. By restricting access for minors and enforcing transparency, the law aims to reduce the risk of harmful interactions between children and AI chatbots.
Source: theverge.com