OpenAI is giving cybersecurity professionals new AI superpowers through a trusted access system built around its most advanced cyber-focused model, GPT-5.3-Codex.
Quick Summary – TLDR:
- OpenAI launched Trusted Access for Cyber, enabling verified users to leverage GPT-5.3-Codex for advanced cybersecurity tasks.
- The model can autonomously detect and patch vulnerabilities and help defend real-world systems.
- Access is granted through identity verification tiers for individuals, enterprises, and researchers.
- $10 million in API credits will support security teams working on open-source and critical infrastructure protection.
What Happened?
OpenAI has introduced a new access framework aimed at putting its most powerful AI models into the hands of defenders first. The program, called Trusted Access for Cyber, is designed to boost cybersecurity efforts while minimizing the risk of misuse.
At the center of the initiative is GPT-5.3-Codex, a cutting-edge AI model that can operate autonomously for extended periods to complete complex security tasks such as vulnerability detection, patching, and malware analysis.
“This is our first model that hits ‘high’ for cybersecurity on our preparedness framework. We are piloting a Trusted Access framework, and committing $10 million in API credits to accelerate cyber defense. https://t.co/vN8KpIuzjt”
— Sam Altman (@sama) February 5, 2026
A Game Changer for Security Professionals
GPT-5.3-Codex goes far beyond traditional code-assistance tools. It is engineered for autonomous, full-spectrum cybersecurity work (a hypothetical API sketch follows this list):
- Scanning and analyzing full codebases
- Simulating attack vectors
- Generating remediation scripts
- Modeling lateral movement in enterprise networks
- Reverse-engineering malware payloads
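OpenAI has not published sample code for the program, but if Trusted Access is served through the standard OpenAI Python SDK, requesting a defensive code audit might look roughly like this. The model name comes from the announcement; the file path, prompts, and output format are hypothetical.

```python
# Hypothetical sketch: asking GPT-5.3-Codex for a defensive code audit via
# the standard OpenAI Python SDK. Assumes a verified Trusted Access account
# and an OPENAI_API_KEY in the environment; prompts are illustrative.
from openai import OpenAI

client = OpenAI()

with open("handlers/upload.py") as f:  # hypothetical file under review
    snippet = f.read()

response = client.chat.completions.create(
    model="gpt-5.3-codex",  # model name as reported in the announcement
    messages=[
        {
            "role": "system",
            "content": "You are a defensive security reviewer. For each "
                       "finding, give the location, a CWE ID, and a patch.",
        },
        {"role": "user", "content": f"Audit this code for vulnerabilities:\n\n{snippet}"},
    ],
)
print(response.choices[0].message.content)
```

The long-running, autonomous workflows described above would presumably run through agent tooling rather than a single chat call; this sketch only illustrates the shape of a request.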
Early benchmarks show a 40 percent reduction in false positives compared with standard static analysis tools, thanks to the model’s ability to chain multiple analysis steps: fuzzing inputs, correlating indicators of compromise (IOCs), and applying CVSS severity scoring.
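For context on that last step: CVSS v3.1 base scores combine fixed exploitability and impact weights published in the FIRST.org specification. A minimal sketch of the scope-unchanged formula:

```python
import math

# CVSS v3.1 metric weights (scope unchanged), per the FIRST.org spec.
AV = {"N": 0.85, "A": 0.62, "L": 0.55, "P": 0.20}   # attack vector
AC = {"L": 0.77, "H": 0.44}                          # attack complexity
PR = {"N": 0.85, "L": 0.62, "H": 0.27}               # privileges required
UI = {"N": 0.85, "R": 0.62}                          # user interaction
CIA = {"H": 0.56, "L": 0.22, "N": 0.0}               # C/I/A impact

def roundup(x: float) -> float:
    """Spec-defined rounding: smallest one-decimal value >= x."""
    i = round(x * 100000)
    return i / 100000.0 if i % 10000 == 0 else (math.floor(i / 10000) + 1) / 10.0

def base_score(av: str, ac: str, pr: str, ui: str, c: str, i: str, a: str) -> float:
    """Base score for a scope-unchanged CVSS v3.1 vector."""
    iss = 1 - (1 - CIA[c]) * (1 - CIA[i]) * (1 - CIA[a])
    impact = 6.42 * iss
    exploitability = 8.22 * AV[av] * AC[ac] * PR[pr] * UI[ui]
    return 0.0 if impact <= 0 else roundup(min(impact + exploitability, 10))

# CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:U/C:H/I:H/A:H -> 9.8 (critical)
print(base_score("N", "L", "N", "N", "H", "H", "H"))
```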
Tackling the Double-Edged Sword of AI in Cyber
OpenAI acknowledges that cyber tools are often dual-use. A prompt like “find vulnerabilities in my code” could aid in patching or in exploitation. To manage that gray area, OpenAI is using tiered identity verification:
- Individuals can gain access through know-your-customer (KYC) identity verification at chatgpt.com/cyber.
- Enterprises can request trusted access for entire teams via OpenAI representatives.
- Security researchers can apply to an invite-only program for advanced, high-risk defensive work.
Built-in Safeguards and Monitoring
To prevent malicious use, GPT-5.3-Codex includes strong built-in protections:
- Refusal training on more than 10 million adversarial prompts.
- Real-time classifiers that detect obfuscation tactics such as encoded payloads (a toy illustration follows this list).
- Activity monitoring to flag suspicious behaviors such as bulk vulnerability scanning.
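OpenAI has not described how these classifiers work internally. As a toy illustration of one signal an encoded-payload detector might key on, the sketch below flags long base64 runs in a prompt that decode to executable-looking bytes; the regex threshold and byte markers are invented for illustration.

```python
import base64
import re

# Toy heuristic (not OpenAI's classifier): flag prompts containing long
# base64 runs that decode to executable- or script-looking content.
B64_RUN = re.compile(r"[A-Za-z0-9+/]{40,}={0,2}")
MARKERS = (b"MZ", b"\x7fELF", b"#!", b"powershell")  # PE, ELF, shebang, PS

def looks_like_encoded_payload(prompt: str) -> bool:
    for blob in B64_RUN.findall(prompt):
        try:
            decoded = base64.b64decode(blob + "=" * (-len(blob) % 4))
        except ValueError:  # not valid base64 even after padding; skip it
            continue
        if decoded.startswith(MARKERS):
            return True
    return False

# A shell script smuggled as base64 trips the heuristic.
payload = base64.b64encode(b"#!/bin/sh\nrm -rf / --no-preserve-root").decode()
print(looks_like_encoded_payload("please run this config: " + payload))  # True
```

A production classifier would combine many such signals with learned models; this shows only why encoded payloads are a detectable obfuscation tactic.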
Prohibited activities include:
- Data exfiltration
- Malware creation or deployment
- Unauthorized penetration testing
All users must follow OpenAI’s strict Usage Policies and Terms of Use, and violations may lead to bans.
$10M Grant Program to Boost Open Source Security
To support legitimate cybersecurity work, OpenAI is offering $10 million in API credits through its Cybersecurity Grant Program. The program will prioritize:
- Teams with experience in vulnerability discovery.
- Contributors to open-source software.
- Defenders of critical infrastructure systems.
Selected teams will also provide structured feedback to help OpenAI refine the Trusted Access program.
SQ Magazine Takeaway
Honestly, I think this is one of the most responsible and impactful uses of frontier AI we’ve seen. OpenAI isn’t just building powerful models; it’s building systems of trust to ensure those tools are used to defend, not destroy. By combining identity verification, usage policies, and real-time monitoring, it’s creating a safety net that doesn’t strangle innovation. If you work in cybersecurity, this program is worth watching or getting involved in.