The actual break-even point for running LLMs yourself
Cloud APIs are cheap at small volume and expensive at large volume. Here's the rough math on where self-hosting starts to pay off, what it actually costs, and the common mistakes.
A question we get on almost every engagement: “Should we self-host our models instead of paying OpenAI/Anthropic/Google?”
The answer depends on usage, data sensitivity, and available engineering time. Here’s how we walk through it.
Cloud API pricing in 2026
As of early 2026, typical per-million-token prices for common models:
- GPT-4o-class models: $2–$5 for input, $8–$15 for output.
- Claude Sonnet: ~$3 input / $15 output.
- Claude Opus, GPT-4 Turbo, and other flagship models: several multiples of those rates.
- Gemini Flash and smaller open models on managed APIs: around $0.50–$2.
At low volume, this is almost always cheaper than self-hosting. A typical early-stage SaaS product processing a few million tokens a month lands under $100 on any of these.
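As a sanity check on that claim, API spend is just token volume times the per-million price. A quick sketch, using the illustrative mid-range prices above (not a quote from any provider):

```python
def monthly_api_cost(input_tokens_m, output_tokens_m, input_price, output_price):
    """Monthly API spend in USD, given token volumes in millions of tokens
    and per-million-token prices."""
    return input_tokens_m * input_price + output_tokens_m * output_price

# An early-stage SaaS product: ~3M input / 1M output tokens per month,
# at Sonnet-style pricing ($3 in / $15 out).
cost = monthly_api_cost(3, 1, 3.0, 15.0)
print(f"${cost:.0f}/month")  # $24/month -- comfortably under $100
```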
Self-hosting cost, done honestly
A reasonable 7B-model deployment on a single-GPU cloud instance costs roughly:
- GPU instance (L4, A10, or equivalent, 24/7): $400–$900/month.
- Storage, network egress, and base compute: $50–$150/month.
- Engineering to set up and operate: if it’s run by us on a retainer, several hundred dollars a month. If it’s an FTE on your team, much more.
So the floor for a self-hosted deployment is roughly $500–$1,500/month all-in, plus upfront engineering.
The break-even point
Self-hosting becomes cheaper than cloud APIs roughly when your monthly API spend exceeds the self-hosted floor for a comparable workload. In practice:
- Under 10M tokens/month: Stay on cloud APIs. Self-hosting is more expensive and more work.
- 10M–40M tokens/month: It depends. If your workload is steady (roughly the same volume every hour), self-hosting starts to look interesting. If it’s bursty, cloud APIs remain cheaper per useful hour.
- Over 40M tokens/month of steady traffic: Self-hosting usually wins on cost, often by a lot.
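The thresholds above fall out of simple division: break-even volume is the self-hosted floor divided by your blended per-million API price. A sketch with illustrative numbers (a $500/month floor from above, and an assumed output-heavy blend of ~$12.50 per million tokens; your blend will differ):

```python
def break_even_tokens_m(selfhost_floor_usd, blended_price_per_m):
    """Monthly token volume (in millions of tokens) at which steady
    API spend matches the self-hosted monthly floor."""
    return selfhost_floor_usd / blended_price_per_m

# $500/month floor, output-heavy workload blending to ~$12.50 per 1M tokens.
print(break_even_tokens_m(500, 12.5))  # 40.0 -- i.e. roughly 40M tokens/month
```

A cheaper blend (more input-heavy, smaller models) pushes the break-even point proportionally higher, which is why bursty or input-heavy workloads stay on the API longer.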
We’ve seen a client move from roughly $4,000/month on cloud APIs to $700/month on a single-GPU self-hosted deployment for a narrow workload. That’s meaningful, but they only got there after validating the use case on an API for a few months first.
Cost isn’t the only axis
Two other factors matter:
Latency. A local model co-located with your application responds in 50–300ms. A cloud API typically adds 100–400ms of network and queuing time on top of the raw inference. For real-time UIs (chat interfaces, interactive tools), this is noticeable to users. For batch work, it’s irrelevant.
Data control. If your data can’t leave your infrastructure — regulated industries, specific client contracts — cloud APIs are off the table and the cost comparison is moot. In that case, self-hosting is the only answer and the question is just how to do it well.
Model quality has narrowed
Two years ago, the quality gap between frontier cloud models and open-weight models was large. In 2026, for most narrow business tasks (classification, extraction, summarization, routine Q&A), a well-chosen 7B or 13B open-weight model performs comparably to a frontier cloud model. Users often cannot tell the difference in blind tests.
For hard reasoning, long multi-step tasks, or anything where the frontier lab’s specific training matters (Claude for writing, GPT for code, Gemini for multimodal), cloud models still have an edge. Know which bucket your task is in before making the decision.
The hybrid pattern
Most of our clients end up on a hybrid:
- The 80–90% of requests that are routine (classification, support replies, summarization) go to a self-hosted small model.
- The 10–20% of requests that are complex or user-facing (a user’s open-ended question, a tricky edge case) fall back to a cloud API.
The hybrid almost always beats either pure approach. The small model handles volume; the large model handles the long tail. You pay for each only where it pulls its weight.
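In code, the routing layer can be as small as one function. A minimal sketch, with a hypothetical length-and-task heuristic standing in for whatever classifier you actually use:

```python
# Tasks considered routine enough for the self-hosted small model.
# (Illustrative names, not a standard taxonomy.)
ROUTINE_TASKS = {"classify", "summarize", "extract", "support_reply"}

def route(task: str, prompt: str) -> str:
    """Return which backend should serve this request:
    'local' = self-hosted small model, 'cloud' = frontier API."""
    # Routine, well-bounded tasks with short prompts go local.
    if task in ROUTINE_TASKS and len(prompt) < 4000:
        return "local"
    # Open-ended or long requests fall through to the cloud API.
    return "cloud"

print(route("summarize", "Summarize this ticket: ..."))  # local
print(route("open_qa", "Why did my deployment fail?"))   # cloud
```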
Honest costs of self-hosting
A list of things people underestimate:
- Idle cost. A GPU rented 24/7 costs the same whether it’s at 10% or 90% utilization.
- Engineering time. Model updates, inference-server upgrades, queue tuning, monitoring — this is real work. Budget several hours a month at a minimum.
- Eval and regression. When you update the model, something will get worse. You need a way to notice.
- Failure modes you don’t get on cloud APIs. GPU hardware fails. Kubernetes pods die. You own the on-call pager.
If your team can’t take those on, either hire a partner to run it or stay on the cloud API. Both are defensible choices.
Practical checklist
Before self-hosting, have real answers to:
- What are your monthly token volumes, with a month of actual measurement (not a guess)?
- What’s the latency requirement for your most important request path?
- What are the data controls — is anything here unable to go to a cloud processor?
- Who owns this after it ships? Name them.
- Is there a fallback plan if the self-hosted path breaks?
If you can answer all five, self-hosting is a reasonable conversation. If you can’t, stay on the API a little longer and revisit when you can.
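On the last point, the usual fallback plan is a thin wrapper that retries against a cloud API when the self-hosted path fails. A sketch with placeholder client functions (`call_self_hosted` and `call_cloud_api` are hypothetical, not a real SDK):

```python
def call_self_hosted(prompt: str) -> str:
    # Placeholder for your self-hosted inference endpoint.
    # Simulates an outage for this sketch.
    raise ConnectionError("GPU node unreachable")

def call_cloud_api(prompt: str) -> str:
    # Placeholder for a cloud API client.
    return f"[cloud] answer to: {prompt}"

def generate(prompt: str) -> str:
    """Prefer the self-hosted model; fall back to the cloud API on failure."""
    try:
        return call_self_hosted(prompt)
    except (ConnectionError, TimeoutError):
        # Self-hosted path is down -- degrade to the (pricier) cloud API
        # rather than failing the user's request.
        return call_cloud_api(prompt)

print(generate("hello"))  # [cloud] answer to: hello
```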
If you’d like a second opinion on the math for your specific workload, we’re happy to look at it.