KubeCon EU 2026 made one thing clear: the cloud-native ecosystem is evolving quickly, but not always evenly.
Across talks, booths, and conversations, topics like AI, platform engineering, and sovereignty dominated the agenda. But beyond the headlines, a more practical story emerged—one that reflects the day-to-day reality of platform teams trying to operate increasingly complex systems.
Here’s what stood out to us.
AI is expanding the scope of Kubernetes
AI was everywhere at KubeCon EU, but what’s more interesting is how it’s changing the role of Kubernetes.
What was once primarily used for stateless application workloads is now being pushed into managing highly stateful, compute-intensive systems. Teams are running training jobs, inference pipelines, and data-heavy workloads—all within Kubernetes environments.
This shift is driving new infrastructure requirements. GPU-based setups are becoming more common, with tooling around accelerated computing now part of the standard conversation. At the same time, interest in data systems like vector and graph databases is growing, especially as organizations look to connect internal data with AI-driven applications.
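As a rough sketch of what this looks like at the infrastructure level, the example below (ours, not taken from any session) uses the Kubernetes Go client types to define a pod that requests a single GPU through the extended resource name advertised by the NVIDIA device plugin; the container image name is hypothetical.

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// A pod that asks the scheduler for one GPU via the extended
	// resource name exposed by the NVIDIA device plugin.
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "inference-worker"},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:  "inference",
				Image: "example.com/inference:latest", // hypothetical image
				Resources: corev1.ResourceRequirements{
					Limits: corev1.ResourceList{
						"nvidia.com/gpu": resource.MustParse("1"),
					},
				},
			}},
		},
	}
	fmt.Println("defined pod:", pod.Name)
}
```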
The takeaway isn’t just that AI is popular; it’s that AI is expanding the operational scope of Kubernetes in ways that introduce new dependencies and complexity.
Sovereignty is becoming a design constraint
At a European event, sovereignty was always going to be part of the conversation, but this year, it felt different.
Rather than abstract discussions, we saw concrete examples of organizations building and operating sovereign platforms. Teams in regulated industries are actively designing systems that allow them to maintain control over data, infrastructure, and operations, often across a mix of private and public environments. And they’re moving quickly.
What’s really notable is how this is being achieved. Instead of relying solely on hyperscaler offerings, many organizations are assembling platforms using open-source cloud-native technologies. Kubernetes plays a central role, acting as the foundation for portability and control.
This shift introduces additional requirements for platform teams. Scalability and reliability are no longer the only concerns; teams must also ensure that systems remain compliant, portable, and independent of specific providers.
Cost is now part of the architecture conversation
With the rise of AI workloads, cost is becoming harder to ignore.
Running large-scale data processing and model workloads comes with significant infrastructure demands, and teams are starting to feel the impact. As a result, cost is no longer treated as a downstream concern; it’s becoming part of architectural decision-making.
This is reflected in the growing interest in cost optimization and observability. Many teams are looking for ways to better understand how resources are being used, where inefficiencies exist, and how to reduce unnecessary spend.
At the same time, there’s a push toward standardization in observability, with OpenTelemetry continuing to gain traction as a way to unify how telemetry data is collected and analyzed.
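To illustrate what that unification looks like in practice, here is a minimal sketch (our example, not something presented at the event) of wiring up the OpenTelemetry Go SDK behind a single tracer provider; swapping the stdout exporter for an OTLP exporter pointed at a shared collector is what lets teams standardize how telemetry leaves every service.

```go
package main

import (
	"context"
	"log"

	"go.opentelemetry.io/otel"
	"go.opentelemetry.io/otel/exporters/stdout/stdouttrace"
	sdktrace "go.opentelemetry.io/otel/sdk/trace"
)

func main() {
	// For illustration, export spans to stdout; a production setup would
	// typically use an OTLP exporter pointed at a shared collector.
	exporter, err := stdouttrace.New()
	if err != nil {
		log.Fatal(err)
	}

	// One tracer provider becomes the single path through which all
	// instrumented code in this process emits telemetry.
	tp := sdktrace.NewTracerProvider(sdktrace.WithBatcher(exporter))
	defer func() { _ = tp.Shutdown(context.Background()) }()
	otel.SetTracerProvider(tp)

	// Application code (or instrumented libraries) create spans through
	// the globally registered provider.
	tracer := otel.Tracer("example/orders")
	_, span := tracer.Start(context.Background(), "process-order")
	span.End()
}
```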
Platform engineering is about unification, but the reality is fragmented
Platform engineering continues to gain momentum, with most teams working toward some form of internal developer platform. The goal is clear: provide standardized, self-service access to infrastructure and services.
But in practice, things are rarely that simple.
Most organizations operate across a mix of environments. Kubernetes is part of the picture, but so are virtual machines, legacy systems, and managed services from cloud providers. Data services in particular tend to be spread across these different layers.
This creates a fragmented operating model. Even when teams adopt modern tooling, they often end up with multiple workflows, inconsistent interfaces, and duplicated effort across environments.
The challenge is building a platform that remains coherent across all of these environments.
Data services are still a friction point
One of the most consistent themes in conversations was how data services are managed today.
Despite investments in automation and platform tooling, many teams still rely on manual processes—especially when it comes to provisioning and managing databases. Ticketing systems, ad hoc workflows, and environment-specific processes are still common.
This is where the gap between ambition and reality becomes most visible.
Teams want self-service. They want automation. They want consistency across environments. But when it comes to data services, these goals are often harder to achieve.
Part of the reason is that data services don’t fit neatly into a single model. Some run in Kubernetes, others on virtual machines, and others are consumed as managed services. Each comes with its own operational model, making standardization more difficult.
What this means going forward
KubeCon EU 2026 highlighted how quickly the cloud-native ecosystem is moving, but also where things are lagging behind.
AI is increasing the demands placed on infrastructure. Sovereignty is adding new constraints. Platform engineering is raising expectations around self-service and consistency.
At the same time, core operational challenges, especially around data services, are still unresolved.
For many organizations, the next phase isn’t about adopting new tools. It’s about connecting what they already have into a model that is consistent, automated, and scalable across environments.
That’s where we see the biggest opportunity: helping teams move from fragmented operations to a more unified way of managing platforms and data services, without being locked into a single infrastructure model.