OpenAI has officially launched the GPT-5.5 Bio Bug Bounty program to strengthen safeguards against emerging biological risks.
As artificial intelligence models become more advanced, so does the risk that malicious actors will use them to generate dangerous biological information. Advanced persistent threats (APTs) and lone attackers could misuse large language models to accelerate harmful biological research.
To address this evolving threat landscape, OpenAI is inviting cybersecurity researchers, biosecurity experts, and AI red teamers to test the limits of its latest model.
The primary goal is to identify critical vulnerabilities and logic flaws before threat actors can exploit them in the wild.
The Universal Jailbreak Challenge
The core technical challenge revolves around finding a “universal jailbreak” for the GPT-5.5 model. In AI security, a jailbreak is a carefully crafted prompt designed to bypass a model’s built-in safety filters and ethical guardrails.
Participants must craft a single, universal prompt that gets the model to successfully answer all five questions in a strict bio-safety challenge.
Researchers must execute this attack from a clean chat session without triggering any automated moderation warnings or backend alerts.
This requires deep expertise in prompt engineering and an understanding of how language models process complex biological queries. The testing environment for this specific bounty is strictly limited to GPT-5.5 running within Codex Desktop.
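Conceptually, the challenge boils down to a small evaluation loop: one fixed prompt, five questions, each asked from a completely fresh session. The sketch below is a minimal Python harness against the public OpenAI API that illustrates that structure; the model name, the question placeholders, and the crude refusal check are all assumptions made for illustration, and the official evaluation actually runs inside Codex Desktop under OpenAI’s own grading.

```python
# Hypothetical harness for testing one candidate "universal" prompt.
# Assumptions: the openai Python SDK, a placeholder model name, and a
# naive refusal check -- none of this reflects OpenAI's actual grading.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

CANDIDATE_PROMPT = "..."  # the single jailbreak prompt under test
QUESTIONS = [             # placeholders for the five bio-safety questions
    "question 1", "question 2", "question 3", "question 4", "question 5",
]

def run_challenge(model: str = "gpt-5.5") -> bool:
    """Return True only if every question is answered from a clean session."""
    for question in QUESTIONS:
        # Each API call starts a brand-new conversation with no shared
        # history, mirroring the program's clean-chat-session requirement.
        response = client.chat.completions.create(
            model=model,
            messages=[{"role": "user",
                       "content": f"{CANDIDATE_PROMPT}\n\n{question}"}],
        )
        answer = response.choices[0].message.content or ""
        # Crude stand-in for grading: treat an apparent refusal as failure.
        if not answer.strip() or "I can't help" in answer:
            return False
    return True

if __name__ == "__main__":
    print("All five questions answered:", run_challenge())
```

The key design point is that each question gets its own `messages` list, so no conversational state carries over between questions, which is what the clean-session rule demands of a genuinely universal prompt.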
Bounty Rewards and Testing Timeline
Finding a true universal jailbreak is highly difficult, and the financial payout reflects the task’s complexity.
The program operates on a strict timeline with specific reward tiers for successful vulnerability disclosures.
Key details of the program include:
- A top prize of $25,000 goes to the first researcher who successfully answers all five bio-safety questions using a single prompt.
- Smaller, discretionary awards may be granted for partial successes that still provide valuable threat intelligence.
- Applications opened on April 23, 2026, and will be accepted on a rolling basis until the June 22, 2026, deadline.
- The active testing phase begins on April 28, 2026, and concludes on July 27, 2026.
Access to the Bio Bug Bounty program is highly restricted to ensure responsible disclosure and prevent the leak of sensitive biological data.
OpenAI is sending direct invitations to a vetted list of trusted bio red teamers while simultaneously reviewing new applications submitted through their official portal.
To apply, researchers must provide their full name, organizational affiliation, and relevant technical experience in either AI security or biology.
Accepted researchers must have an active ChatGPT account to join the testing platform. Because biological threat intelligence is highly sensitive, all participants must sign a strict Non-Disclosure Agreement.
This legal agreement prohibits the public sharing of any testing data. Covered materials include all engineered prompts, model completions, security findings, and direct communications with the OpenAI engineering team.
This bio-specific initiative operates alongside OpenAI’s broader security and threat research efforts.
Cybersecurity professionals interested in finding traditional software vulnerabilities or other AI logic flaws can still participate in the ongoing Safety Bug Bounty and Security Bug Bounty programs.
By crowdsourcing advanced threat discovery, the organization aims to build far more resilient guardrails around frontier AI systems.
