Last week, we ran a simple experiment. We gave Claude Code (running locally with cluster access) a single prompt:
"I need a production-ready PostgreSQL cluster with 3 replicas on my EKS cluster. Set it up."
Without any additional context, the agent read our cluster state, installed the necessary operator, enabled the PostgreSQL addon, and applied a working YAML manifest. The database was up and running in under 4 minutes.
Then, we tried the exact same prompt, but this time we forced the agent to use a traditional, dedicated PostgreSQL operator. It took three correction rounds, two hallucinated fields, and considerable token burn before the YAML finally validated.
Why the massive difference? It comes down to how AI agents actually "learn" to use Kubernetes APIs, and why the traditional "one database, one operator" model is fundamentally broken for autonomous operations.
When you ask an AI agent to deploy a database, it doesn't just magically know the YAML structure. It has to discover it.
Typically, the agent runs kubectl api-resources, finds the relevant Custom Resource Definition (CRD), and then runs kubectl explain (or pulls the OpenAPI schema) to figure out what fields are required.
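In practice, that discovery loop looks something like the following sketch (the CRD name here is CloudNativePG's, used purely as an example — the exact resources depend on which operator is installed):

```shell
# Step 1: enumerate API resources to find the relevant CRD
kubectl api-resources | grep -i postgres

# Step 2: dump the full schema for the discovered CRD
# (resource.group form; --recursive prints every nested field)
kubectl explain clusters.postgresql.cnpg.io --recursive
```

The output of that second command alone can run to hundreds of lines, all of which lands in the agent's context window.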
For a human engineer, reading a new CRD schema is annoying but manageable. For an AI agent, it's a token-guzzling nightmare.
If your infrastructure relies on CloudNativePG for Postgres, Percona for MySQL, and the MongoDB Community Operator for Mongo, your agent has to explore and memorize three entirely different API dialects. It has to remember that CloudNativePG uses a Cluster CRD with an instances field for replica count, while Percona uses an entirely different PerconaServerMySQL CRD with its own schema structure.
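To make the dialect problem concrete, a minimal CloudNativePG manifest looks roughly like this — note that even the replica count lives under an operator-specific field name, which the agent must memorize per operator:

```yaml
# CloudNativePG dialect: replica count is spec.instances, not "replicas"
apiVersion: postgresql.cnpg.io/v1
kind: Cluster
metadata:
  name: pg-cluster
spec:
  instances: 3
  storage:
    size: 1Gi
```

Percona, MongoDB, and every other operator express the same concepts through entirely different schemas.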
As the agent tries to cram these massive, distinct schemas into its context window, it starts forgetting constraints, mixing up syntax, and hallucinating fields that don't exist — a failure mode that Anthropic's research on context engineering identifies as a core risk of bloated, unfocused context [1]. We saw this firsthand: our agent tried to apply a MySQL backup configuration to a PostgreSQL cluster simply because it got confused by the overlapping terminology.
This is where KubeBlocks completely changes the game.
KubeBlocks wasn't originally built for AI agents; it was built to stop human engineers from having to learn N operators for N databases [2]. It abstracts the commonalities of 30+ database engines into a single, unified API.
But it turns out, an API designed to reduce human cognitive load is exactly what an AI agent needs to succeed.
With KubeBlocks, the agent only has to learn one core schema: the Cluster CRD.
Once the agent figures out how to deploy a MySQL cluster using KubeBlocks, it has effectively achieved zero-shot generalization for PostgreSQL, Redis, or Kafka. It just changes the clusterDef and componentDef fields.
```yaml
# The agent learns this structure once...
apiVersion: apps.kubeblocks.io/v1
kind: Cluster
metadata:
  name: my-database
spec:
  terminationPolicy: Delete
  componentSpecs:
  - name: mysql
    componentDef: "mysql-8.0"  # ...and just swaps this to 'postgresql-14' or 'redis' next time
    replicas: 2
```
The token savings are massive. The agent doesn't have to re-explore the cluster or read new documentation every time you ask for a different database engine.
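To show how small that swap really is, here is the PostgreSQL version of the same manifest — only the component name and componentDef change (the exact componentDef string depends on which addons are installed in your cluster):

```yaml
# Same schema, different engine: only two fields differ from the MySQL version
apiVersion: apps.kubeblocks.io/v1
kind: Cluster
metadata:
  name: my-database
spec:
  terminationPolicy: Delete
  componentSpecs:
  - name: postgresql               # was: mysql
    componentDef: "postgresql-14"  # was: "mysql-8.0"; actual addon names may differ
    replicas: 2
```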
The real friction with traditional operators happens on Day 2. How do you scale up? How do you trigger a backup? Every operator handles this differently.
KubeBlocks standardizes these actions through unified primitives like the OpsRequest CRD. Whether the agent is restarting a Redis shard or vertically scaling a Kafka broker, the operational interface is identical. The agent learns the "control panel" once and applies it universally.
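As a sketch of that unified control panel (field names may vary slightly across KubeBlocks versions), a restart is just an OpsRequest that names the target cluster and component:

```yaml
# One operational grammar for every engine
apiVersion: operations.kubeblocks.io/v1alpha1
kind: OpsRequest
metadata:
  name: restart-my-database
spec:
  clusterName: my-database
  type: Restart              # or VerticalScaling, HorizontalScaling, Backup...
  restart:
  - componentName: mysql     # same shape for postgresql, redis, or kafka components
```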
Even with a unified API, letting an agent blindly explore a cluster via kubectl explain is inefficient. To fix this, we recently introduced KubeBlocks Agent Skills — available on GitHub [3] and as a one-click installable skill on Termo [4].
These are modular, self-contained markdown files (SKILL.md) that act as explicit instruction manuals for AI agents like Cursor, Claude Code, or Codex.
Instead of guessing, the agent reads a routing file that says: "If the user wants MySQL, read skills/kubeblocks-addon-mysql/SKILL.md."
That specific file gives the agent exactly what it needs for the task at hand, along with pointers to related operational skills (such as kubeblocks-vertical-scaling).
It's a concept called Progressive Disclosure. The agent only loads the specific knowledge required for the immediate task, keeping its context window clean and its reasoning sharp. It doesn't need to know how MongoDB sharding works when it's just trying to back up Postgres.
I won't pretend it's flawless. During our tests, the agent still occasionally struggled with complex OpsRequest status tracking — sometimes assuming an operation was complete before the pods had actually restarted. We had to add explicit "wait and verify" loops into the Agent Skills to force the model to be patient.
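The fix was simple in shape: make the agent poll the OpsRequest status instead of assuming success. A minimal version of such a loop (assuming an OpsRequest named restart-my-database, and that the status phase reports Succeed on completion, as in current KubeBlocks releases) looks like:

```shell
# Poll the OpsRequest until it actually reports completion, with a bounded timeout
for i in $(seq 1 60); do
  phase=$(kubectl get opsrequest restart-my-database -o jsonpath='{.status.phase}')
  [ "$phase" = "Succeed" ] && break
  [ "$phase" = "Failed" ] && echo "OpsRequest failed" && exit 1
  sleep 10
done
```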
But the baseline difference is undeniable.
If we are moving toward a future where agentic AI handles routine infrastructure provisioning and maintenance, we cannot keep building fragmented, bespoke operators for every single piece of software. The cognitive load is too high, even for LLMs.
KubeBlocks proves that a unified, declarative abstraction layer isn't just a nice-to-have for platform engineering teams — it is a prerequisite for autonomous operations.
Want to try it yourself? If you're using Cursor or Claude Code, clone the KubeBlocks Agent Skills repository and point your agent at the root SKILL.md. If you'd rather skip the setup entirely, you can launch a pre-configured agent on Termo in under a minute — no installation required.
[1] Anthropic. (2025). Effective context engineering for AI agents. https://www.anthropic.com/engineering/effective-context-engineering-for-ai-agents
[2] KubeBlocks. (2026). Introduction to KubeBlocks. https://kubeblocks.io/docs/preview/user_docs/overview/introduction
[3] ApeCloud. (2026). KubeBlocks Agent Skills Repository. https://github.com/apecloud/kubeblocks-skills
[4] Termo. (2026). KubeBlocks Skill on Termo. https://termo.ai/skills/kubeblocks-skills