The incident involving Madhu Gottumukkala, the Acting Director of the U.S. Cybersecurity and Infrastructure Security Agency (CISA), highlighted the difference between being allowed to use a tool and controlling what data is fed into it. Last summer, Gottumukkala uploaded internal agency contracting documents marked “for official use only” into the public version of ChatGPT. The materials were considered sensitive and not intended for public release.
CISA’s monitoring systems detected the uploads and automatically generated several cybersecurity alerts, prompting an internal review by the U.S. Department of Homeland Security (DHS) to assess potential harm to national security. The key issue was not only the act of uploading itself but also the channel involved: a public AI service, where uploaded data is transmitted to a third-party provider and may be used for model training or to generate responses for other users.
Paradoxically, the incident occurred even though ChatGPT access was blocked by default for agency staff. Gottumukkala had been granted a temporary exception, while other internal DHS AI tools (such as its in-house chatbot) are designed to keep data from leaving federal networks. Human factors and usage context remain the weakest link, even in mature cybersecurity environments.
At present, Gottumukkala is the most senior political official at CISA—an agency responsible for protecting federal networks from sophisticated, state-sponsored hackers from adversarial countries, including Russia and China.
The incident sparked debate not only about potential disciplinary accountability but also about the boundaries of acceptable use of public AI tools in government and other critical environments. Granting “exceptional” access without strict, context-aware controls can easily become a direct path to unintended data leakage.
How This Could Have Been Prevented
Such situations can be prevented by a solution like Netwrix Endpoint Protector with its Content Aware Protection module. This approach controls not only the data transfer channel but also the context of the transfer: the type of information, its classification label, the potential exit point (for example, a public AI service), the user’s role, and organizational policy. As a result, even an authorized employee would be unable to send a file marked “for official use only” to an external AI tool: the action would be automatically blocked, or logged with an immediate alert to the security team. The sketch below illustrates the idea.
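To make the principle concrete, here is a minimal Python sketch of a context-aware transfer rule that combines a file’s classification label, the destination, and the user’s exception status. Every name in it (TransferContext, evaluate_transfer, the domain and label lists) is a hypothetical illustration, not Netwrix Endpoint Protector’s actual API or configuration.

```python
# Hypothetical sketch of context-aware DLP policy evaluation.
# All names and lists are illustrative assumptions, not a real product API.

from dataclasses import dataclass
from enum import Enum, auto


class Verdict(Enum):
    ALLOW = auto()
    BLOCK = auto()
    ALLOW_AND_ALERT = auto()


@dataclass
class TransferContext:
    """Context gathered at the moment of an attempted file transfer."""
    classification: str   # label parsed from the document, e.g. "FOUO"
    destination: str      # e.g. "chat.openai.com"
    user_role: str        # e.g. "director", "analyst"
    has_exception: bool   # was a temporary access exception granted?


# Destinations treated as public AI services (assumed example list).
PUBLIC_AI_DOMAINS = {"chat.openai.com", "chatgpt.com", "gemini.google.com"}

# Labels that must never leave the federal network (assumed example list).
RESTRICTED_LABELS = {"FOUO", "FOR OFFICIAL USE ONLY", "CUI"}


def evaluate_transfer(ctx: TransferContext) -> Verdict:
    """Decide whether a transfer is allowed, based on content AND context.

    The key point: an access exception does not override the data
    classification. Even a user allowed to reach a public AI service
    cannot move restricted content there.
    """
    going_to_public_ai = ctx.destination in PUBLIC_AI_DOMAINS
    restricted_content = ctx.classification.upper() in RESTRICTED_LABELS

    if going_to_public_ai and restricted_content:
        return Verdict.BLOCK            # block regardless of role or exception
    if going_to_public_ai and not ctx.has_exception:
        return Verdict.BLOCK            # the channel is blocked by default
    if going_to_public_ai:
        return Verdict.ALLOW_AND_ALERT  # exception holders are still audited
    return Verdict.ALLOW


# In the incident described above, the exception let the user reach the
# service, but a content-plus-context rule would still have blocked the file:
ctx = TransferContext(
    classification="FOR OFFICIAL USE ONLY",
    destination="chat.openai.com",
    user_role="director",
    has_exception=True,
)
assert evaluate_transfer(ctx) is Verdict.BLOCK
```

The deciding design choice is that the classification check runs before, and independently of, the access exception: being allowed to reach a service does not make the data allowed to leave the network.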