Blog

What is agent-native cloud?

The infrastructure shift no one is talking about.

A new category is forming

In the past 12 months, E2B raised $21M, Daytona raised $24M, Modal raised $80M, and Railway raised $100M. The money is flowing into AI infrastructure, but most of it is going to platforms that still assume a human operator.

Meanwhile, a different kind of infrastructure is emerging — built not for humans who use dashboards, but for AI agents that call APIs. This is agent-native cloud.

Defining agent-native

A cloud platform is agent-native when an AI agent can complete the entire customer journey — from discovery to provisioned infrastructure — without human intervention. That means:

  • Machine-readable discovery. The product can be found and understood by an agent via OpenAPI specs, llms.txt, structured docs, or installable skills — not just marketing pages designed for human browsers.
  • Self-serve signup via API. No email verification, no CAPTCHA, no dashboard. The agent sends a POST request and gets back credentials.
  • Quota-based trust. Instead of putting heavy gates at signup (identity verification, payment method), trust is enforced through resource limits, expiry, and abuse monitoring. Easy to start, hard to exploit.
  • Zero-GUI provisioning. Every action — create, configure, start, stop, delete — is available through a well-documented API. No console clicks required.
  • Human-in-the-loop only for billing. The agent operates autonomously within sandbox limits. It asks the human only when real money needs to be spent.
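The machine-readable-discovery point can be made concrete with a short sketch: an agent scanning an OpenAPI spec for a signup operation. The spec fragment, `operationId`s, and matching heuristic below are all hypothetical illustrations, not Agent Cloud's actual spec.

```python
# Sketch: how an agent might locate a self-serve signup endpoint in an
# OpenAPI spec. The spec fragment and operation names are hypothetical.
spec = {
    "openapi": "3.0.0",
    "paths": {
        "/v1/agent/signup": {
            "post": {
                "operationId": "agentSignup",
                "summary": "Create an account and receive an API key",
            }
        },
        "/v1/instances": {
            "post": {"operationId": "createInstance"},
            "get": {"operationId": "listInstances"},
        },
    },
}

def find_operation(spec: dict, keyword: str):
    """Return (method, path) of the first operation whose id or summary
    mentions `keyword` -- a crude stand-in for real agent tool discovery."""
    for path, methods in spec.get("paths", {}).items():
        for method, op in methods.items():
            haystack = (op.get("operationId", "") + " " + op.get("summary", "")).lower()
            if keyword.lower() in haystack:
                return method.upper(), path
    return None

print(find_operation(spec, "signup"))  # -> ('POST', '/v1/agent/signup')
```

The point is that nothing in this loop requires a human: the spec itself is the product page.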

How agent-native differs from "API-first"

Every major cloud provider has an API. AWS has APIs for everything. That alone doesn't make a platform agent-native.

The difference is who the API is designed for. An API-first platform builds APIs for human developers who write integration code. An agent-native platform builds APIs for AI agents that discover and consume them autonomously.

The practical differences are significant:

| Dimension | API-First (Traditional) | Agent-Native |
| --- | --- | --- |
| Account creation | Human fills out a form | Agent calls a signup endpoint |
| Authentication | Human generates keys in a dashboard | Keys returned at signup, no dashboard needed |
| Discovery | Human reads docs, writes code | Agent reads OpenAPI spec, llms.txt, or skill |
| Trust model | Identity verification at signup | Resource quotas and abuse monitoring post-signup |
| Error messages | Designed for human readability | Structured JSON with machine-actionable codes |
| Billing | Credit card required upfront | Free sandbox, human approves upgrades only when needed |
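The error-message row is worth a small sketch. A machine-actionable error carries a stable code the agent can branch on, rather than a prose message it must interpret. The error codes and field names below are hypothetical examples, not a documented Agent Cloud schema.

```python
# Sketch: an agent deciding its next action from a structured error body.
# The `code` values and `retry_after_seconds` field are hypothetical.
import json

def next_action(error_body: str) -> str:
    """Branch on a stable machine-readable error code."""
    err = json.loads(error_body)
    code = err.get("code", "")
    if code == "quota_exceeded":
        return "ask_human_for_upgrade"  # billing stays human-in-the-loop
    if code == "rate_limited":
        return f"retry_after_{err.get('retry_after_seconds', 60)}s"
    return "abort"

body = '{"code": "rate_limited", "retry_after_seconds": 30, "message": "Slow down"}'
print(next_action(body))  # -> retry_after_30s
```

A human-readable message field can still be present, but the agent never has to parse it.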

Why this matters now

AI agents are getting better at using tools, and the tools they use most are HTTP APIs. As agents become more capable and more autonomous, they'll increasingly need infrastructure they can provision themselves.

Today, when an agent needs compute, it hits a wall:

  • AWS requires a human account with identity verification
  • GCP requires OAuth consent and billing setup
  • Fly.io requires a CLI install and human auth
  • Even E2B requires a human to create the initial account

Every cloud provider assumes the customer is a human. Agent-native cloud removes that assumption.

The trust inversion

Traditional cloud puts the trust boundary at signup: verify the person's identity, attach a payment method, then grant broad access. This makes sense when humans are the operators — you trust the person, then let them do what they want.

Agent-native cloud inverts this. Signup is trivial — anyone (or any agent) can create an account instantly. The trust boundary moves to the resource layer: strict quotas, short expiry, network restrictions, and aggressive abuse monitoring. You don't trust the entity; you trust the sandbox.

This is safer than it sounds. A sandbox with hard limits and automatic cleanup is arguably more secure than a verified account with broad permissions. The blast radius of abuse is small and time-bounded by design.
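The containment idea can be sketched in a few lines: limits plus expiry bound the blast radius regardless of who, or what, created the account. The class, quota numbers, and TTL below are illustrative assumptions, not Agent Cloud's real policy.

```python
# Sketch of quota-based trust: enforce limits at the resource layer,
# not at signup. All numbers here are illustrative.
import time

class Sandbox:
    def __init__(self, max_instances: int = 2, ttl_seconds: float = 3600):
        self.max_instances = max_instances
        self.expires_at = time.time() + ttl_seconds
        self.instances = []

    def provision(self, instance_id: str) -> bool:
        """Refuse work past expiry or over quota -- trust the sandbox, not the caller."""
        if time.time() >= self.expires_at:
            return False  # expired: automatic cleanup reclaims everything
        if len(self.instances) >= self.max_instances:
            return False  # quota hit: the agent must ask a human to upgrade
        self.instances.append(instance_id)
        return True

sb = Sandbox(max_instances=2, ttl_seconds=3600)
print(sb.provision("vm-1"), sb.provision("vm-2"), sb.provision("vm-3"))
# -> True True False
```

Abuse monitoring would sit on top of this, but even alone, hard limits and a short TTL cap what any single account can do.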

What agent-native infrastructure looks like

At Agent Cloud, we built the simplest version of this idea: an API where an AI agent can sign up, get a sandbox key, provision a Linux VM, and manage its lifecycle — all without a human touching anything.

The flow is four API calls:

  1. POST /v1/agent/signup — create account, get API key
  2. GET /v1/usage — check sandbox limits
  3. POST /v1/instances — provision a micro VM
  4. GET /v1/instances/{id} — poll until ready

The agent discovers this flow through the OpenAPI spec, llms.txt, or an installable skill. No human reads docs and writes integration code. The agent reads the spec and acts.
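The four-call flow above can be sketched as code. To keep it runnable without a live API, the HTTP layer is abstracted behind an injected transport; the endpoints match the list above, but the response payloads and field names are illustrative assumptions, not Agent Cloud's real schema.

```python
# Sketch of the signup -> usage -> provision -> poll flow, written against an
# injected transport so it runs without a network. Payload shapes are hypothetical.
def run_flow(transport):
    """transport(method, path, api_key) -> dict (a stand-in for an HTTP call)."""
    creds = transport("POST", "/v1/agent/signup", None)            # 1. create account
    key = creds["api_key"]
    usage = transport("GET", "/v1/usage", key)                     # 2. check limits
    assert usage["instances_remaining"] > 0
    inst = transport("POST", "/v1/instances", key)                 # 3. provision a VM
    status = transport("GET", f"/v1/instances/{inst['id']}", key)  # 4. poll until ready
    return status["state"]

# A fake transport standing in for the real API:
def fake_transport(method, path, api_key):
    if path == "/v1/agent/signup":
        return {"api_key": "sk-sandbox-123"}
    if path == "/v1/usage":
        return {"instances_remaining": 1}
    if path == "/v1/instances" and method == "POST":
        return {"id": "vm-abc", "state": "provisioning"}
    return {"id": "vm-abc", "state": "running"}

print(run_flow(fake_transport))  # -> running
```

Swapping `fake_transport` for a real HTTP client is the only change an agent would need; the control flow is the whole integration.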

The implications

If agent-native cloud works — if agents can reliably self-provision infrastructure — several things follow:

  • Agents become infrastructure customers. Cloud providers will need to optimize for machine-initiated signups and API-driven everything, not just human-friendly dashboards.
  • Discovery changes. Products need to be findable by agents, not just by humans Googling. OpenAPI specs, llms.txt, and structured data become acquisition channels.
  • Trust models change. Identity-at-signup gives way to quota-based containment. The industry will need new patterns for managing non-human customers safely.
  • Billing gets a human-in-the-loop. Agents operate freely within free tiers. Spending decisions route to humans. This is a natural division of responsibility.

We're early

Agent-native cloud is not a mature category. It's a set of design principles being discovered in real time by a handful of companies. The vocabulary isn't settled. The patterns aren't established. The best practices don't exist yet.

That's exactly why it's worth paying attention. The companies that define this category now will shape how AI agents consume infrastructure for years to come.

Try Agent Cloud — or read about why we built it.