We already have credible public signals that AI-assisted systems can help discover real-world vulnerabilities in widely used open source components. Google Project Zero and Google DeepMind disclosed that an AI agent called Big Sleep found an exploitable vulnerability in SQLite, and maintainers fixed it the same day it was reported. Google’s security team has also described AI-assisted fuzzing work that reported new vulnerabilities to open source maintainers, including one in OpenSSL. DARPA’s AI Cyber Challenge was built around the same trajectory: automated vulnerability discovery and patching at scale.
As discovery accelerates, the time between unknown and exploited compresses. That weakens any security model built around periodic assurance. Annual penetration tests and quarterly scans still matter, but they cannot be the backbone of resilience when a motivated adversary can probe continuously, adapt quickly and never get tired.
Reducing the value of the inevitable breach
Resilience begins with data minimization. If an internet-facing service does not need raw sensitive data, it should not be able to retrieve it. Tokenization and non-reversible storage, among other approaches, reduce the value of a successful breach. You cannot lose what you never collected, and you cannot leak what the service cannot see.
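One way to make this concrete is a minimal tokenization sketch in Python. This is an illustration under stated assumptions, not a production design: the key name, values, and vault-free HMAC scheme are hypothetical, and in practice the key would live in a KMS or HSM, never next to the tokens. The point it demonstrates is that the internet-facing service stores only tokens and can still match records, while the raw value is never present to leak.

```python
import hmac
import hashlib

# Hypothetical key for illustration only; in a real deployment this
# would be held in a KMS/HSM, never stored alongside the tokens.
SECRET_KEY = b"example-key-do-not-use-in-production"

def tokenize(value: str) -> str:
    """Map a sensitive value to a stable, non-reversible token.

    The same input always yields the same token, so the service can
    still deduplicate and look up records, but the raw value cannot
    be recovered from the token without the key and brute force.
    """
    digest = hmac.new(SECRET_KEY, value.encode("utf-8"), hashlib.sha256)
    return digest.hexdigest()

# The internet-facing service stores only the token, never the raw value.
record = {"customer": tokenize("4111-1111-1111-1111")}

# Equality checks still work without the service ever seeing raw data.
assert record["customer"] == tokenize("4111-1111-1111-1111")
assert record["customer"] != tokenize("4111-1111-1111-1112")
```

A breach of this service yields only HMAC outputs; without the separately stored key, an attacker cannot invert them, which is exactly the "cannot leak what the service cannot see" property described above.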
