AI workloads are exposing a mismatch in how most teams have built their data platforms.
You see it whether you are building agentic apps, shipping conversational analytics, or using AI to speed up incident response. Suddenly, the database has to handle far more concurrent queries, return answers in well under a second, and retain much more granular data for much longer. Systems built for batch reporting and periodic dashboards start to look out of step with the job at hand.
This is already happening across three areas that are converging faster than most teams expected: application development, business analytics, and observability.
Agents don’t query like humans
The move from human-driven to agent-driven analytics may be the biggest shift in database workload patterns in the last decade.
When a person asks a question in natural language, the model behind the scenes usually does not issue one tidy SQL query. It can trigger dozens in rapid succession while it explores the schema, tests different paths, and reasons through multiple possibilities in parallel. One prompt turns into a burst of concurrent queries. Analyst workloads start to resemble customer-facing production traffic: high concurrency, low latency, interactive response times.
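That fan-out pattern is easy to see in miniature. The sketch below simulates it with `asyncio`: one prompt triggers several exploratory queries at once, so the wall-clock cost is roughly one query's latency, but the database sees the whole burst concurrently. The queries and latencies are illustrative stand-ins, not a real agent framework.

```python
import asyncio
import time

# Hypothetical stand-in for an analytical database call; a real agent
# would issue these as SQL against an OLAP engine. Latency is simulated.
async def run_query(sql: str, latency: float) -> str:
    await asyncio.sleep(latency)
    return f"result of: {sql}"

async def answer_prompt() -> list:
    # One natural-language prompt fans out into many exploratory queries:
    # schema inspection, candidate aggregations, sanity checks.
    queries = [
        ("SELECT * FROM information_schema.columns LIMIT 50", 0.05),
        ("SELECT count(*) FROM orders", 0.05),
        ("SELECT status, count(*) FROM orders GROUP BY status", 0.05),
        ("SELECT avg(total) FROM orders WHERE created_at > yesterday()", 0.05),
    ]
    # Issued concurrently: the burst costs about one query's latency,
    # not the sum of all four -- but the database handles 4 queries at once.
    return await asyncio.gather(*(run_query(q, lat) for q, lat in queries))

start = time.perf_counter()
results = asyncio.run(answer_prompt())
elapsed = time.perf_counter() - start
print(f"{len(results)} queries answered in {elapsed:.2f}s")
```

Serially those four queries would take about 0.2 seconds of simulated latency; concurrently they complete in roughly 0.05. Scale that to dozens of queries per prompt and many prompts per minute, and the concurrency profile looks nothing like a human analyst's.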
That breaks some of the assumptions on which traditional cloud data warehouses were built. Those systems are generally optimized for throughput on relatively infrequent, heavyweight queries, not thousands of short, concurrent ones. Put AI analyst workloads on top of that architecture, and you usually end up in one of two bad places: latency that makes the assistant feel sluggish, or costs that rise faster than the value you are getting.
This is why real-time analytical databases built for interactive workloads are starting to look less like a nice-to-have and more like the natural fit. The rise of MCP servers that expose databases directly to agents, analytics bots in Slack, and open-source agentic architectures gives a pretty clear picture of what production agentic analytics looks like in practice: natural language in, SQL out, answers back in seconds, with the database quietly handling the concurrency.
Postgres + OLAP is becoming the default
One of the clearest signals in the market is the growing consensus around a simple architecture: Postgres for transactions, paired with a columnar OLAP engine for analytics. GitLab described this pattern back in 2022, and it has increasingly become the default open-source stack for scaling agentic AI applications.
Postgres handles row-oriented transactional workloads. ClickHouse, or another columnar engine, handles the analytical side: fast ingestion, sub-second queries across very large datasets, and the concurrency that AI-powered features demand.
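The routing decision at the heart of this pattern can be sketched in a few lines. The handlers and heuristic below are illustrative assumptions, not a real library API: in practice the row store would sit behind a Postgres driver and the column store behind a ClickHouse client, and routing is often done per table or per service rather than per statement.

```python
# Minimal sketch of statement routing in a Postgres + OLAP stack.
# The backend names and keyword heuristic are illustrative only.
ANALYTICAL_HINTS = ("group by", "count(", "sum(", "avg(", "percentile")

def route(sql: str) -> str:
    """Send point reads and writes to the row store, aggregations to the column store."""
    s = sql.lower()
    if s.startswith(("insert", "update", "delete")):
        return "postgres"   # transactional write path
    if any(hint in s for hint in ANALYTICAL_HINTS):
        return "clickhouse" # large scans and aggregations
    return "postgres"       # default: point lookups stay on the row store

print(route("INSERT INTO orders VALUES (1, 'paid', 42.0)"))          # postgres
print(route("SELECT status, count(*) FROM orders GROUP BY status"))  # clickhouse
print(route("SELECT * FROM orders WHERE id = 1"))                    # postgres
```

The interesting engineering is not the routing itself but keeping the two stores in sync, typically via change data capture from Postgres into the columnar engine, so analytical reads lag transactional writes by seconds rather than hours.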
AI makes this architecture feel less optional and more urgent. Features like AI-generated insights, natural-language product interfaces, and autonomous analysis all depend on a much tighter loop between transactional writes and analytical reads. The closer the integration between those two layers, the faster teams can ship useful products rather than fight their plumbing.
Observability runs into the same problem
Observability is running into the same architectural problem.
The classic three-pillar model, with metrics, logs, and traces stored separately, was shaped by an era when storage was expensive and query patterns were predictable. AI-driven SRE workflows do not fit that model very well.
They need granular, high-cardinality data with long retention so an agent can triage incidents, correlate signals, and work backward to a root cause. Sampled logs and aggressively rolled-up metrics are a poor substrate for that kind of reasoning. If an AI agent is trying to connect an error spike to a deployment event from three days earlier, the real constraint is often not the model, but the missing data.
This is the shift Charity Majors has described as Observability 2.0: wide, structured events in a columnar engine, with metrics and traces derived at query time rather than precomputed in advance. A growing number of modern observability companies have moved in this direction. Traditional vendors are stuck with an uncomfortable tradeoff: their per-GB pricing pushes customers to ingest less data, which is the opposite of what AI-heavy workflows need.
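The wide-events idea is simple enough to show in a toy form. In the sketch below, each unit of work is one structured event, and a "metric" like error rate is derived at query time by aggregating over whatever dimension the question calls for. The field names and events are invented for illustration; a real system would run this aggregation as SQL over a columnar store.

```python
from collections import defaultdict

# One wide, structured event per unit of work (illustrative fields).
events = [
    {"service": "checkout", "status": 500, "duration_ms": 812, "deploy_id": "d42"},
    {"service": "checkout", "status": 200, "duration_ms": 95,  "deploy_id": "d42"},
    {"service": "search",   "status": 200, "duration_ms": 40,  "deploy_id": "d17"},
    {"service": "checkout", "status": 500, "duration_ms": 990, "deploy_id": "d42"},
]

def error_rate_by(events, key):
    """Derive an error-rate 'metric' from raw events at query time."""
    totals, errors = defaultdict(int), defaultdict(int)
    for e in events:
        totals[e[key]] += 1
        if e["status"] >= 500:
            errors[e[key]] += 1
    return {k: errors[k] / totals[k] for k in totals}

# Slice by deploy to correlate an error spike with a release --
# a question a pre-rolled-up metric cannot answer after the fact.
print(error_rate_by(events, "deploy_id"))
```

Because nothing was pre-aggregated, the same events can be re-sliced by service, deploy, customer, or any other high-cardinality field, which is exactly the flexibility an AI agent needs when working backward to a root cause.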
Two categories and one set of requirements
For years, observability and data warehousing were treated as separate categories, with different buyers, budgets, and tooling. Technically, though, they are starting to look a lot alike.
Both write into object storage. Both need low-latency, high-concurrency queries. Both are layering on AI-driven analysis. And the underlying data overlaps more than most teams assume: API calls can also be purchases, and errors can also be failed transactions. Open table formats like Iceberg are making this convergence much more practical, with columnar databases serving as the fast query layer on top.
The cost of waiting is going up
The database market is being redrawn around the requirements AI workloads impose: high concurrency, real-time performance, full-fidelity retention, and direct accessibility for agents.
Columnar analytical databases built for interactive workloads are in a strong position because those requirements line up with what they were designed to do. But the bigger point is architectural, not just vendor-specific.
Teams will need tight integration between transactional and analytical systems, as in the Postgres + OLAP pattern. They will need native agent interfaces, such as MCP, so AI systems can access the data without layers of bespoke glue code. And they will need LLM observability tooling to trace, evaluate, and govern agent behavior in production.
The cost of migrating off legacy platforms is real, but finite. The cost of spending the next five years on a platform that cannot handle agentic query volumes is not.
Alasdair Brown has spent the past decade designing, building and operating data platforms, from user-facing, real-time analytics for top brands, to some of the world’s largest nation state cyber-defense systems. He is an advocate for simple data architectures and often…
