Key Points
- CISA's acting director uploaded "for official use only" documents to the public version of ChatGPT.
- The uploads triggered internal cybersecurity alerts designed to prevent unauthorized disclosure.
- DHS staff normally use approved AI tools that keep data on federal networks.
- The uploaded material was unclassified but sensitive, potentially affecting privacy and national programs.
- Public AI tools like ChatGPT, with about 700 million users, present real data‑security risks.
- An investigation is underway to determine possible administrative or disciplinary measures.
Incident Overview
The acting director of the Cybersecurity and Infrastructure Security Agency (CISA), Madhu Gottumukkala, inadvertently uploaded documents designated as “for official use only” to the publicly accessible version of ChatGPT last summer. According to four Department of Homeland Security officials with knowledge of the incident, the uploads set off multiple internal cybersecurity alerts designed to stop the theft or unintentional disclosure of government material from federal networks.
Restricted AI Access and Policy
Gottumukkala sought special permission to use OpenAI's chatbot, a tool that most DHS staff are blocked from accessing. DHS personnel instead typically rely on approved AI-powered applications, such as the agency's DHSChat, which are configured to prevent queries or documents from leaving federal networks. The access request and the subsequent misuse troubled officials; in one's words, the director "forced CISA's hand into making them give him ChatGPT, and then he abused it."
Nature of the Leaked Information
The material uploaded was not classified but carried a “for official use only” label, a designation used within DHS to identify unclassified information of a sensitive nature. If shared without authorization, such information could adversely impact a person’s privacy or welfare and could impede programs essential to the national interest.
Potential Exposure and Risks
Because ChatGPT is a public service with roughly 700 million active users, officials are concerned that the uploaded material could be retained by the platform and surface in responses to other users' prompts. Experts have long warned that public AI tools carry exactly these risks: data submitted to them can be stored, exposed in a breach, or incorporated into answers served to others.
Investigation and Possible Consequences
DHS has launched an investigation to assess whether the incident harmed government security. Officials indicated the inquiry could result in administrative or disciplinary action ranging from a formal warning and mandatory retraining to suspension or revocation of a security clearance. OpenAI did not respond to requests for comment.
Source: arstechnica.com