OpenAI warns future AI models could heighten cybersecurity risks

Key Points

  • OpenAI warns future LLMs could aid in creating zero‑day exploits and advanced cyber‑espionage.
  • The company is investing in defensive tools and workflows for code auditing and vulnerability patching.
  • A tiered access program will give security teams enhanced capabilities while managing misuse risk.
  • OpenAI will form a Frontier Risk Council of cybersecurity experts to guide safeguards.
  • Participation in the Frontier Model Forum enables sharing of best practices with industry partners.

Potential cybersecurity threats from future models

OpenAI announced that its next generation of large language models could, in theory, help develop working zero‑day remote exploits against well‑defended systems or meaningfully assist with complex and stealthy cyber‑espionage campaigns. The company described these emerging cyber capabilities as “advancing rapidly” and noted that they could emerge in any frontier model across the industry.

Defensive investments and tooling

To prepare for these risks, OpenAI said it is investing in strengthening models for defensive cybersecurity tasks and creating tools that enable defenders to more easily perform workflows such as auditing code and patching vulnerabilities. The approach includes a combination of access controls, infrastructure hardening, egress controls, and monitoring.

Tiered access program for security professionals

OpenAI plans to introduce a program that will give users and customers working on cybersecurity tasks tiered access to improved capabilities. The program is intended to provide security teams with the advanced features they need while managing the potential for misuse.

Frontier Risk Council

The company will establish an advisory group called the Frontier Risk Council, composed of seasoned cybersecurity experts and practitioners. The council will initially focus on cybersecurity, advising on the boundary between useful, responsible capability and potential misuse, and its insights will directly inform OpenAI’s evaluations and safeguards.

Collaboration through the Frontier Model Forum

OpenAI highlighted its participation in the Frontier Model Forum, where it shares knowledge and best practices with industry partners. The forum helps identify how AI capabilities could be weaponized, where critical bottlenecks exist for different threat actors, and how frontier models might provide meaningful uplift for both attackers and defenders.

Balancing risk and benefit

While acknowledging the heightened risk, OpenAI stressed that the advancements also bring “meaningful benefits for cyberdefense”. By investing in defensive tooling, tiered access, and expert advisory groups, the company aims to mitigate potential threats while supporting security professionals in protecting digital systems.

Source: techradar.com