$ dottxt generate \
--model Qwen/Qwen3.5-27B \
--prompt "Is this output valid?" \
--schema '{"valid": "boolean"}'
{"valid": true}
No Bad Outputs
Reliable Agent Infrastructure
"Structured generation is the future of LLMs."
Julien Chaumond, Co-founder & CTO, Hugging Face
Reliability, from tokens to agents
Production AI needs more than models that usually work. It needs contracts that always hold, from the tokens a model emits, to the agents that compose them, to the specifications those agents fulfill.
This is what we build.
Structured outputs. Per-call reliability. Your LLM's output matches your schema, every time. No retries, no validation loops, no defensive parsing. JSON Schema, regular expressions, context-free grammars: whatever shape your system needs, the model produces it by construction.
api.dottxt.ai platform
The fastest way to put .txt's structured generation into production. Access our latest constrained-decoding technology on a pay-per-token basis, powered by the newest open-source models. Guaranteed-valid outputs behind an API you can call today.
For teams that self-host
Drop-in replacements for the inference servers your team already runs: vLLM, SGLang, or TensorRT-LLM. Keep your existing stack and gain reliable JSON, grammar-constrained, and function-calling outputs without the performance penalties of post-hoc validation.
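With a self-hosted, OpenAI-compatible server, schema constraints typically ride along in the request body. The sketch below shows what such a request might look like; the `guided_json` field is a vLLM-style extension, and the URL and model name are placeholders, so check your server's documentation for the exact parameter it accepts.

```python
# Sketch of a schema-constrained chat request to a self-hosted,
# OpenAI-compatible inference server. Field names beyond the standard
# chat-completions shape (here, `guided_json`) vary by server.
import json

schema = {
    "type": "object",
    "properties": {"valid": {"type": "boolean"}},
    "required": ["valid"],
}

payload = {
    "model": "Qwen/Qwen3.5-27B",  # whatever model your server has loaded
    "messages": [{"role": "user", "content": "Is this output valid?"}],
    "guided_json": schema,        # constrain decoding to the schema
}

# POST this body to e.g. http://localhost:8000/v1/chat/completions
body = json.dumps(payload)
```

The point of the drop-in design is that only this one extra field changes: the rest of the request, and the client code around it, stays exactly as it was.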
For inference providers
Composable libraries that bring .txt's structured generation to any inference stack. dotjson enforces JSON Schema, dotgrammar handles arbitrary context-free grammars, and dotlambda powers reliable function calling. They are designed to integrate cleanly into existing inference pipelines, enabling providers to offer best-in-class structured outputs to their users.
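To see what reliable function calling buys downstream code, consider the dispatch side. This is a generic illustration, not dotlambda's actual API: the tool table and call payload are invented for the example. If the model's output is guaranteed to match a call schema, dispatch collapses to a lookup with no defensive parsing.

```python
# If constrained decoding guarantees the model emits a well-formed call,
# the dispatcher needs no try/except around parsing or argument handling.
# Tool names and the payload below are illustrative.
import json

TOOLS = {"get_weather": lambda city: f"22C in {city}"}

def dispatch(raw: str) -> str:
    call = json.loads(raw)  # guaranteed parseable by construction
    return TOOLS[call["name"]](**call["arguments"])

result = dispatch('{"name": "get_weather", "arguments": {"city": "Paris"}}')
```

Without the guarantee, every line of `dispatch` would need a failure path: malformed JSON, unknown tool names, missing or extra arguments. Constraining generation moves that burden from every caller into the decoder itself.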
We make AI behave like software
New rules for AI
Every powerful system begins the same way: with rules. Rules as a language, a protocol, a way for parts to fit together into something greater. The history of software is really a history of this idea: from UNIX pipes to API contracts to typed programming languages, composability has always been the path to scale.
But large language models broke that pattern. They’re powerful, but unpredictable. Flexible, but fragile. And without contracts, their outputs don’t compose; they just accumulate noise. Despite their tremendous capabilities, the potential of AI systems built on LLMs remains unrealized.
That’s why we built .txt: to make AI composable.
We believe that safety and reliability are the foundation. That contracts are what let systems scale. That AI should behave like software.
So we build reliability at every layer of the stack: from the tokens a model emits, to the agents that act on them.
AI is the new software. We make sure it acts like it.