With attackers able to move at AI speed, defenders can’t rely on the techniques and instincts they’ve come to trust. Even the best of best practices won’t meet the threat, said speakers at the recent SecureWorld conference in Boston.
An organization that wants to be resilient in the AI age needs to detect and fend off malicious activity as it occurs.
“That means putting in place stronger identity controls,” said Jack Butler, a senior enterprise solutions engineer at Sumo Logic, a SecOps vendor. “That means putting in place a more robust logging program and correlation engines to detect across all of these in real time and reassess signals of trust. Trust needs to be reassessed dynamically.”
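The dynamic reassessment Butler describes can be sketched in a few lines. This is a hypothetical illustration, not any vendor's implementation: the signal sources, risk weights and re-auth threshold are all invented, and a real correlation engine would consume live log streams rather than a static list.

```python
from dataclasses import dataclass

# Hypothetical sketch: correlate identity signals from multiple log
# sources and recompute a trust score as each new signal arrives,
# rather than trusting a session once at login.

@dataclass
class Signal:
    source: str   # e.g. "idp", "vpn", "endpoint" (invented names)
    risk: float   # 0.0 (benign) to 1.0 (high risk)

def reassess_trust(signals: list[Signal], base_trust: float = 1.0) -> float:
    """Dynamically erode trust as correlated risk signals accumulate."""
    trust = base_trust
    for s in signals:
        trust *= (1.0 - s.risk)  # each risky signal lowers the score
    return trust

# A session that looked fine at login can fall below a re-auth
# threshold as new signals are correlated in.
session = [Signal("idp", 0.1), Signal("vpn", 0.5), Signal("endpoint", 0.6)]
print(reassess_trust(session) < 0.5)  # True: force re-authentication
```

The multiplicative score is one simple design choice; the point is that the decision is recomputed on every signal instead of once per session.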
Identity protection needs to meet the threat
As for what to do about the substantial challenge of managing identities associated with people, machines and AI agents, panelists at SecureWorld emphasized visibility.
“Know what is in your environment, and know what it is doing,” recommended Chandra Pandey, CEO of Seceon, a security vendor. “If you know what is in your environment with machines, humans and all that — in real time — and you know what you’re doing, you have done 80% of your work.”
Reckoning with all that discovery isn’t easy, especially with the nearly incalculable numbers of nonhuman identities (NHIs) in use in modern IT environments. Machine identity management and NHI security pose a big and growing challenge for security teams.
“Make sure that you’re really asking yourself: What systems do you have — human and nonhuman identities — and what do they have access to,” Butler said. “Make sure that you are assuming zero trust. You’re going to get pwned, and, when you do, they’re going to take access.”
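Butler's two questions can be framed as a minimal inventory audit. This is a sketch under invented assumptions: the identity names, scopes and approved baseline are illustrative, and a real audit would pull from an IAM or secrets-management system.

```python
# Hypothetical sketch: enumerate every identity, human and nonhuman,
# and what it can access, then flag anything holding scopes beyond an
# approved baseline. All names and scopes here are invented.

inventory = {
    "ci-deploy-token": {"type": "nonhuman", "access": {"deploy", "read-repo"}},
    "report-bot":      {"type": "nonhuman", "access": {"read-db", "admin"}},
    "jane.doe":        {"type": "human",    "access": {"read-repo"}},
}

# Zero-trust posture: assume any identity can be stolen, so nothing
# should hold more access than it demonstrably needs.
approved = {"deploy", "read-repo", "read-db"}

def over_privileged(inv: dict) -> list[str]:
    return sorted(name for name, meta in inv.items()
                  if meta["access"] - approved)

print(over_privileged(inventory))  # ['report-bot'] (holds 'admin')
```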
“Start with AI agents,” advised Kelsey Brazill, vice president of market strategy at P0 Security, an identity security vendor. “They’re new, so there’s less baggage there, and it’s easier to implement some best practices and standards. And then that sets you up to extend that to all of the NHIs in your system.”
SOC analysts have seen AI used against them for a while, but defenders haven’t shifted their thinking enough to fully confront AI’s weaponization, said Patricia Titus, field CISO at security vendor Abnormal AI.
“Stop constantly looking for indicators of compromise,” Titus recommended. “By the time somebody gets hit and your SOC analysts write a rule and plug it into your systems, it could already be too late for your organization. We have to start thinking a little bit differently and start looking at attributing behavior.”
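The shift Titus describes, from matching known indicators to attributing behavior, can be illustrated with a toy baseline check. The events and thresholds below are invented; production behavioral analytics use far richer features, but the contrast with a static IOC rule is the same.

```python
from statistics import mean, stdev

# Hypothetical sketch: instead of waiting for an analyst to write a
# rule for a known indicator, baseline an identity's normal behavior
# and flag deviations from it.

def is_anomalous(history: list[float], latest: float,
                 z_threshold: float = 3.0) -> bool:
    """Flag a data-transfer volume far outside this identity's baseline."""
    mu, sigma = mean(history), stdev(history)
    if sigma == 0:
        return latest != mu
    return abs(latest - mu) / sigma > z_threshold

daily_mb = [120, 135, 110, 128, 140, 125, 130]  # invented baseline days
print(is_anomalous(daily_mb, 131))   # False: within normal behavior
print(is_anomalous(daily_mb, 9000))  # True: behavioral outlier
```

No signature of the attack is needed; the anomaly is defined relative to the identity's own history.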
With AI’s help, threat actors can be deliberate about who they target. This means attackers rely less on classic, spray-and-pray intrusion attempts, Titus said, and can instead use AI to quickly comb through vast amounts of data to craft specific attacks on a particular individual. Those highly targeted tactics tend to be more successful.
Fayyaz Rajpari, senior director of GSI at SaaS security vendor AppOmni, said he has seen many compromises in the past year that had nothing to do with humans and instead involved cloud services, SaaS, NHIs, tokens and AI agents. That type of malicious behavior is hard to defend against, he said. “You have to start figuring out how you can leverage AI against these AI-generated attacks and interconnected systems. It’s difficult, but that’s just the reality.”
Can AI agents be secured?
AI agents are good at evading whatever guardrails cybersecurity teams put in place. “Their job is to finish a workload. If they have to go around to the backdoor and beg another agent to give them access, which we’ve already seen, they will get granted access,” Titus said.
To respond, teams need to design AI models that will mask data and take other protective measures, said Peter Steyaert, a senior manager of systems engineering at Fortinet. “You’re going to have to limit exposure. It’s going to have to be an accepted risk level through accepted LLMs, which means you’re going to have to build a trusted model. Ensure what you’re using internally is trusted.”
That trust won’t develop easily, Steyaert said, and there will need to be a meeting of the minds involving a CISO, CIO, the legal department and others to agree on how much risk an organization is willing to accept.
When it comes to risks posed by AI agents, visibility isn’t enough. Configuration management tools need to be capable of spotting a suspicious agent as soon as it appears, he said, and security teams need to be prepared to act.
“It’s not just detecting. You have to discover it, monitor in real time, kill it,” Pandey said. Aggressive actions might occasionally disrupt an organization’s legitimate use of AI agents, Pandey acknowledged, but the resulting productivity hit is insignificant when compared with the damage a threat actor can do by maliciously using an AI agent.
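Pandey's discover-monitor-kill loop can be sketched against a toy process table. This is a hypothetical illustration only: the agent names and registry are invented, and a real implementation would hook process telemetry and an actual agent registry rather than an in-memory list.

```python
# Hypothetical sketch of the "discover it, monitor in real time,
# kill it" loop: any running agent not in the approved registry is
# terminated immediately. All names here are invented.

registry = {"approved-summarizer", "approved-triage-agent"}

running = [
    {"name": "approved-summarizer", "status": "running"},
    {"name": "unknown-crawler",     "status": "running"},
]

def enforce(procs: list[dict], allowed: set[str]) -> list[str]:
    """Kill any agent not in the registry; return what was terminated."""
    killed = []
    for p in procs:
        if p["name"] not in allowed:  # discovery plus real-time check
            p["status"] = "killed"    # aggressive containment
            killed.append(p["name"])
    return killed

print(enforce(running, registry))  # ['unknown-crawler']
```

The occasional false positive, killing a legitimate but unregistered agent, is the productivity cost Pandey argues is worth paying.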
Bart Lenaerts, product marketing manager at Infoblox, a networking and security vendor, said security teams have little appetite for adding new tools and incurring additional costs. Lenaerts touted the usefulness of standards, which could enable users to register an AI agent in ways similar to how a server is registered. That control can change the security equation. “You’re going to get the visibility. You’re going to be able to make decisions on what you’re going to shut down. And you know exactly what data sovereignty you can build into it,” he said.
To an extent, defenders are being pushed to take more risks with their defensive AI agents, said Lewis Foggie, a sales engineer at SecureFlag, a secure-code training company. He pointed to a recently observed breakout time of just 27 seconds as an example of how breach response has fundamentally changed. “Humans can’t respond to that in time,” Foggie said. “Our agents will need to have some level of autonomy to conduct rapid containment.”
Granting that autonomy, of course, means accepting higher levels of risk. “Who knows when that agent is going to go conduct some other operation that could be catastrophic for the business,” Foggie added.
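One way to bound that risk is to let a defensive agent act autonomously only from a pre-approved containment playbook, escalating everything else to a human. This is a sketch of that design choice, not any vendor's product; the action names are invented.

```python
# Hypothetical sketch of bounded autonomy: the agent may execute
# containment actions from an approved playbook without a human in
# the loop, but anything outside it is escalated for review.
# All action names are invented for illustration.

AUTONOMOUS_ACTIONS = {"isolate_host", "revoke_token", "block_ip"}

def dispatch(action: str) -> str:
    if action in AUTONOMOUS_ACTIONS:
        return f"executed:{action}"   # fast enough for a 27-second breakout
    return f"escalated:{action}"      # catastrophic moves still need a human

print(dispatch("revoke_token"))  # executed:revoke_token
print(dispatch("wipe_server"))   # escalated:wipe_server
```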
Phil Sweeney is an industry editor and writer focused on cybersecurity topics.
