How conversational analytics removes the business intelligence bottleneck
Cybersecurity companies face a paradox. Their customers keep adding more security tools, expecting more protection. But the data increasingly shows that tool sprawl makes organizations slower to detect and respond to threats. At the same time, AI is accelerating both sides of the equation: giving defenders new capabilities while making it dramatically easier for attackers to operate at scale.
For over twenty years, Barracuda has protected organizations from evolving threats with its BarracudaONE cybersecurity platform, which maximizes cyber resilience by unifying protection across email, data, networks, applications, and managed XDR. Barracuda uses Databricks for its enterprise data platform, consolidating fragmented data silos to power ML operations, real-time threat correlation, and business intelligence. Using Databricks Genie, the team quickly developed and launched features like natural language log search for its managed XDR solution, allowing customers to query billions of security events in plain language while maintaining strict data isolation.
Neal Bradbury is Chief Product Officer at Barracuda, responsible for product management, engineering, security, and cloud operations. He has led the shift toward what Barracuda calls AI-native product development, in which intelligence is built into the core of every application rather than added as an interface on top.
The thread running through our conversation was consistent: in an era where attackers operate at scale, the defenders winning with AI are those treating their proprietary security telemetry as a strategic asset. They aren’t just adding AI tools; they are building intelligence directly into the data layer to stay ahead of evolving threats.
What AI-native actually means
Aly McGue: How do you define an “AI-native application” in your business versus a traditional application? What’s the strategic difference for the customer experience?
Neal Bradbury: For us, AI-native means it’s built in, not bolted on. The application must be architected with AI at its core. In security, that means observability, governance, access controls, and enforcement, all built in from day one. We have our Bailey AI Assistant, but our applications themselves, whether it’s our WAF or our email protection, are AI-native at their foundation.
The other big distinction is that AI-native applications continuously adapt. A traditional application is built a certain way, and it operates that way until someone goes in and changes it. An AI-native application is more dynamic. It responds to changing customer data, changing needs, and changing goals. It meets the customer where they are as things evolve, which matters a lot when the landscape is moving as fast as it is right now.
In our case, we’re collecting threats and risks from customers across the BarracudaONE platform. Every customer has a different risk profile. Every customer needs different threats prioritized. So it can’t be rigid. That’s really the strategic difference: an AI-native solution adapts to each customer rather than forcing everyone down the same deterministic path.
Embedding intelligence into the security stack
Aly: What did it take to re-architect your core product and embed AI-native features like personalization, recommendation engines, or copilot tools?
Neal: I’d go back to our managed XDR solution as an example. We had to really question the focus and purpose of that offering, and then work backward. What problem are we actually solving? What outcome are we delivering for the customer? Any product manager should start there, but it becomes even more critical when you’re embedding AI, because the architecture decisions you make early determine what’s possible later.
The foundational piece was organizing the data layer. If your data is all over the place or the schema isn’t shared, it causes problems downstream for everything. Normalizing the schema gave our machine learning models and agents full context across domains, so they could actually do what we needed them to do.
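The idea behind a shared schema can be illustrated with a minimal sketch. The field names and product sources below are invented for illustration, not Barracuda’s actual schema; the point is simply that heterogeneous log records get mapped into one normalized shape before any model or agent sees them:

```python
# Minimal schema-normalization sketch (all field names are hypothetical).
# Events from different products arrive in different shapes; each is mapped
# into one shared schema so downstream models receive uniform context.

FIELD_MAPS = {
    "email": {"sender_ip": "src_ip", "recv_time": "timestamp", "verdict": "severity"},
    "waf":   {"client_ip": "src_ip", "ts": "timestamp", "threat_level": "severity"},
}

def normalize(source: str, raw: dict) -> dict:
    """Rename source-specific fields to the shared schema, tagging provenance."""
    mapping = FIELD_MAPS[source]
    event = {shared: raw[native] for native, shared in mapping.items() if native in raw}
    event["source"] = source
    return event

email_event = normalize("email", {"sender_ip": "203.0.113.9", "recv_time": 1700000000, "verdict": "high"})
waf_event = normalize("waf", {"client_ip": "203.0.113.9", "ts": 1700000042, "threat_level": "low"})
# Both events now expose the same keys: src_ip, timestamp, severity, source
```

Once every product emits the same shape, a cross-domain model or agent can correlate events without per-source special cases, which is the downstream payoff Bradbury describes.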
We were also disciplined about taking small bites. We didn’t try to migrate everything at once. We started with small pieces, iterated, and worked our way toward the full outcome. You can come up with a fancier way to describe it, but it was: understand what the output needs to be, then iterate your way there.
What came out of that process was real-time streaming detection built with notebooks, ML operations running through MLflow, and multiple machine learning models with 30-plus features that continuously improve. And the exciting part is that we’ve been able to extend that same platform pattern to other products: our WAF-as-a-service, our automated configuration engine, API security, and advanced bot protection. So the investment compounds.
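To make the “models with 30-plus features” concrete, here is a toy sketch of turning a window of normalized events into a flat feature vector. The three features, names, and data are invented for illustration and are far simpler than anything in a production detector:

```python
# Illustrative only: three toy features computed over a window of normalized
# events. Real detection models would use many more features; these names
# and values are hypothetical.
from collections import Counter

def extract_features(events: list[dict]) -> dict:
    """Turn a window of events into a flat feature vector for a detector."""
    if not events:
        return {"event_count": 0, "unique_src_ips": 0, "top_ip_share": 0.0}
    ips = Counter(e["src_ip"] for e in events)
    return {
        "event_count": len(events),                              # volume in window
        "unique_src_ips": len(ips),                              # spread of sources
        "top_ip_share": ips.most_common(1)[0][1] / len(events),  # concentration
    }

window = [
    {"src_ip": "203.0.113.9", "severity": "high"},
    {"src_ip": "203.0.113.9", "severity": "low"},
    {"src_ip": "198.51.100.7", "severity": "low"},
]
features = extract_features(window)
```

In a streaming setup, a vector like this would be computed continuously per window and fed to the models; the same feature pipeline can then be reused across products, which is how the platform investment compounds.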
Aligning teams around outcomes, not tools
Aly: How did you successfully align product, data science, and engineering teams to work from a shared data and AI platform to accelerate time to market for these features?
Neal: I’ll sound like a broken record, but it really came down to defining shared outcomes first. Take our impersonation protection feature in Barracuda Email Protection, which protects customers against advanced attacks. The outcome wasn’t simple, but it was clear. And that clarity meant teams could drive toward a unified goal without getting lost in tooling debates. We had Databricks as the platform, we had a destination, and we could just execute.
The same logic applies when we work across non-engineering functions. When we went after churn reduction, we needed customer information, product telemetry, and sales data. Being able to bring all of that together in one enterprise data platform and actually see cross-functional insights is what drove alignment. It wasn’t a mandate from the top. It was a shared outcome that everyone could see and measure. That’s what moves people.
Why your data layer is the real differentiator
Aly: How does building AI-native applications on your own data layer give you a deeper, more defensible competitive advantage compared to relying solely on external SaaS models?
Neal: Your own data layer is the differentiator. Full stop. AI agents are only as strong as the proprietary, context-rich data they can access. When you build on your unified security telemetry, you create an advantage that generic SaaS models just can’t replicate.
Because we build on our own data, we can customize for the specific telemetry and insights we’re getting across the entire security portfolio. That lets us provide targeted recommendations and make decisions alongside our customers in ways that a one-size-fits-all external model never could.
The way I think about it is this: an AI-native product can use customer-specific deployment and behavior context to adapt and respond in ways an external SaaS AI simply cannot. And that advantage compounds. The more data flows through the system, the better it gets at understanding each customer’s unique environment. Nobody can shortcut their way into that.
Closing thoughts
What came through most clearly in this conversation is that AI-native is an architectural commitment, not a feature label. Neal draws a line between products that have AI designed into their foundation and products that add an intelligent interface on top of a traditional system. The difference shows up in how dynamically the product adapts, how well it uses proprietary context, and how defensible the result is over time.
For executives evaluating their own product strategies, the question worth sitting with is: Is intelligence built into the core of what you ship, or is it layered on top? The answer determines not just what your product can do today, but how fast it can evolve when the landscape shifts again.
To learn more about building an effective operating model, download the Databricks AI Maturity Model.
