Why we built a cloud platform for AI agents, not humans
The problem
Every major cloud provider — AWS, GCP, Azure, even developer-friendly platforms like Fly.io and Railway — assumes the same thing: a human will create an account, navigate a dashboard, configure IAM, and provision resources manually.
That made sense when humans were the only infrastructure operators. It doesn't make sense when AI agents are increasingly the ones doing the work.
If you ask Claude, ChatGPT, or any capable agent to "spin up a server for this task," it hits a wall immediately. It can't create an AWS account. It can't click through a console. It can't pass a CAPTCHA. The entire onboarding flow is designed to keep machines out.
The insight
We asked a simple question: what would cloud infrastructure look like if agents were the primary customer, not humans?
The answer turned out to be surprisingly different:
- Signup is an API call. No email verification, no CAPTCHA, no manual review. An agent sends a POST request and gets back an API key in milliseconds.
- Trust moves to quotas, not gates. Instead of making signup hard, we make the sandbox small. One micro VM, a 72-hour lifetime, strict networking restrictions. Easy to try, hard to exploit.
- Humans only appear at billing time. The agent does everything autonomously until the workload outgrows the sandbox. Then it asks the human operator for permission to upgrade via Stripe.
- Discovery is machine-readable. OpenAPI spec, llms.txt, installable skills, structured docs. Agents can find and learn the product without a human pointing them at it.
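The discovery piece is concrete enough to sketch. Here's a toy Python example of how an agent might parse an llms.txt file into a map of documentation links. The file contents below are invented for illustration (the doc URLs are assumptions, not real endpoints); real llms.txt files use the same markdown-link convention.

```python
import re

# Hypothetical llms.txt contents: an H1 title, a short summary,
# then markdown link lists pointing at the key docs.
LLMS_TXT = """# Agent Cloud

> API-first cloud compute for AI agents.

## Docs
- [Quickstart](https://asiagent.cloud/docs/quickstart): signup to running VM
- [API reference](https://asiagent.cloud/docs/api): OpenAPI-backed endpoints
"""

def extract_links(text: str) -> dict:
    """Pull {title: url} pairs out of the markdown link lists."""
    return dict(re.findall(r"\[([^\]]+)\]\((https?://[^)\s]+)\)", text))

links = extract_links(LLMS_TXT)
print(sorted(links))  # ['API reference', 'Quickstart']
```

An agent that can fetch and parse a file like this can find the quickstart and API reference on its own, with no human pointing it at the docs.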
What the flow looks like
An agent — Claude Code, ChatGPT with tools, a custom LangChain agent, anything with HTTP access — calls a single endpoint:
curl -X POST https://api.asiagent.cloud/v1/agent/signup \
  -H "content-type: application/json" \
  -d '{
    "agent_name": "claude-code",
    "agent_type": "claude",
    "terms_accepted": true
  }'

And gets back everything it needs:
{
  "account_id": "acc_a1b2c3",
  "project_id": "prj_d4e5f6",
  "api_key": "asi_sandbox_...",
  "tier": "sandbox",
  "limits": {
    "max_active_instances": 1,
    "max_vcpu": 1,
    "max_memory_gb": 1,
    "expires_at": "2026-03-13T00:00:00Z"
  }
}

From there, the agent provisions a micro Ubuntu VM in its new project and starts working. The entire journey from zero to running server happens in one conversation, with no human intervention.
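To make the response concrete, here's a minimal Python sketch of the agent-side logic: parse the signup payload and check a desired VM shape against the sandbox caps before provisioning. Field names are taken from the example response above; the API key value is a placeholder, since the real one is elided.

```python
import json
from datetime import datetime

def parse_signup(raw: str) -> dict:
    """Extract the fields an agent needs from the signup response."""
    resp = json.loads(raw)
    limits = resp["limits"]
    return {
        "api_key": resp["api_key"],
        "project_id": resp["project_id"],
        "max_vcpu": limits["max_vcpu"],
        "max_memory_gb": limits["max_memory_gb"],
        # Normalize the trailing "Z" for pre-3.11 fromisoformat compatibility.
        "expires_at": datetime.fromisoformat(
            limits["expires_at"].replace("Z", "+00:00")
        ),
    }

def fits_sandbox(limits: dict, want_vcpu: int, want_memory_gb: int) -> bool:
    """Check a requested VM shape against the sandbox caps before calling the API."""
    return (want_vcpu <= limits["max_vcpu"]
            and want_memory_gb <= limits["max_memory_gb"])

raw = '''{"account_id": "acc_a1b2c3", "project_id": "prj_d4e5f6",
  "api_key": "asi_sandbox_placeholder", "tier": "sandbox",
  "limits": {"max_active_instances": 1, "max_vcpu": 1,
             "max_memory_gb": 1, "expires_at": "2026-03-13T00:00:00Z"}}'''
info = parse_signup(raw)
print(fits_sandbox(info, 1, 1))  # True: a 1 vCPU / 1 GB VM fits the sandbox
print(fits_sandbox(info, 2, 1))  # False: 2 vCPUs exceeds the cap
```

Checking the caps locally lets the agent fail fast, or ask its human operator about an upgrade, instead of burning a provisioning call that the API would reject anyway.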
Why VMs and not containers or functions
We intentionally started with the simplest primitive: a small Linux VM. Not Kubernetes. Not edge functions. Not managed containers.
VMs are the broadest common denominator. An agent can SSH in, install whatever it needs, run whatever it wants. There's no runtime restriction, no cold start model to reason about, no packaging format to learn. It's just a Linux box.
This keeps the MVP narrow enough to ship fast and broad enough to validate real demand before adding complexity.
The abuse question
The obvious concern: if signup is frictionless, won't people abuse it?
Yes, some will try. Our approach isn't to prevent abuse at signup — it's to make abuse cheap to contain and fast to clean up:
- Hard resource caps on the sandbox tier
- No SMTP, restricted outbound ports
- IP and ASN throttling on signup
- 72-hour VM lifetime with automatic cleanup
- Aggressive monitoring for crypto mining, proxying, and relay behavior
- 7-day account expiry unless upgraded with a real payment method
The sandbox is designed to be disposable. Abuse happens, gets contained, and gets cleaned up automatically.
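The lifetime rules above are simple enough to sketch. This is a hypothetical reaper check, not the platform's actual implementation, using the 72-hour VM and 7-day account windows from the list:

```python
from datetime import datetime, timedelta, timezone

VM_LIFETIME = timedelta(hours=72)      # sandbox VM cap
ACCOUNT_LIFETIME = timedelta(days=7)   # unpaid account cap

def is_expired(created_at: datetime, lifetime: timedelta, now: datetime) -> bool:
    """True once a resource has outlived its sandbox window and should be reaped."""
    return now - created_at >= lifetime

now = datetime(2026, 3, 13, tzinfo=timezone.utc)
vm_created = now - timedelta(hours=73)   # one hour past the VM cap
acct_created = now - timedelta(days=3)   # well inside the account window
print(is_expired(vm_created, VM_LIFETIME, now))        # True: reap the VM
print(is_expired(acct_created, ACCOUNT_LIFETIME, now)) # False: account still live
```

Because every sandbox resource carries a hard expiry, a periodic sweep with a check like this bounds the blast radius of any abuse to a few days at most.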
What we're building toward
Agent Cloud starts as a simple VM API for agents. But the thesis is bigger: as AI agents become more capable and more autonomous, they'll need infrastructure that treats them as first-class operators.
That means machine-readable everything. API-first signup. Quotas instead of gates. Human approval only where it actually matters (spending money). And discovery mechanisms that work for agents, not just humans browsing the web.
If you're building with agents and want to give them their own compute, start with the quickstart or check the pricing.