LiquidAI x .txt: Function calling on the edge

We've been working with Liquid AI to combine their LFM2-350M model with our dotgrammar library — making AI function calling on edge devices fast, efficient, and rock-solid reliable.

Edge deployments (smart home hubs, industrial IoT sensors, mobile assistants, etc.) face brutal constraints: limited processing power, sub-300ms latency requirements, and no room for error when a function call must work correctly the first time.

That's exactly the problem we set out to solve together.

Our dotgrammar product uses context-free grammars (CFGs) to constrain model outputs during generation. It guarantees syntactically valid outputs every time and adds zero runtime overhead to inference.
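To make the idea concrete, here is a minimal, self-contained sketch of grammar-constrained decoding. It is not the dotgrammar API (which isn't shown in this post); the grammar, function names, and the mock character-level "model" are all illustrative. The key move is the same, though: at each step, the decoder may only pick continuations the grammar allows, so the finished string is valid by construction.

```python
# A toy sketch of grammar-constrained decoding (illustrative only; not
# the dotgrammar API). The constraint here is a tiny function-call
# grammar, expanded to a finite set of complete target strings:
#   CALL -> NAME "(" '"' CITY '"' ")"
#   NAME -> "get_weather" | "set_timer"
NAMES = ("get_weather", "set_timer")
CITIES = ("Paris", "Oslo")

def allowed_next(prefix: str) -> set:
    """Return every character that keeps `prefix` a valid CALL prefix."""
    allowed = set()
    for name in NAMES:
        for city in CITIES:
            target = f'{name}("{city}")'
            if target.startswith(prefix) and len(prefix) < len(target):
                allowed.add(target[len(prefix)])
    return allowed

def constrained_generate(rank):
    """Greedy decode: at each step take the mock model's top-ranked
    character among those the grammar allows."""
    out = ""
    while True:
        mask = allowed_next(out)
        if not mask:                 # no legal continuation: string is complete
            return out
        out += min(mask, key=rank)   # lowest rank = most preferred

# A mock "model" that simply prefers characters with lower code points;
# the grammar mask still forces a syntactically valid call.
result = constrained_generate(rank=ord)
print(result)  # -> get_weather("Oslo")
```

Because invalid characters are masked out before sampling, the model can never emit a malformed call, which is what makes this approach attractive when there is no room for a retry loop on-device.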

Paired with Liquid AI's LFM2-350M, which runs in under 1GB of RAM and delivers sub-100ms inference on common edge hardware, the whole stack is built for the edge from the ground up.

We also replaced bloated JSON function calls with a Pythonic format. The difference? 37 tokens for a JSON call vs. 14 for the Pythonic form, a 2.6x reduction that directly speeds up generation without sacrificing expressivity.
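The snippet below illustrates the shape of the two styles. The function name and arguments are hypothetical, and the 37-vs-14 figure above comes from a real tokenizer; here we just compare raw character lengths as a rough proxy for why the Pythonic form decodes faster.

```python
import json

# A typical JSON-style tool call: key names, quoting, and nesting all
# cost tokens on every single call.
json_call = json.dumps({
    "name": "set_thermostat",
    "arguments": {"room": "living_room", "temperature": 21},
})

# The same call in a Pythonic form: the structure itself carries the
# schema, so far less boilerplate is generated.
pythonic_call = 'set_thermostat(room="living_room", temperature=21)'

print(len(json_call), len(pythonic_call))
```

Fewer characters do not map one-to-one onto fewer tokens, but the trend holds: less structural boilerplate means fewer tokens to generate per call, which matters most on hardware where every decode step is expensive.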

Read the full technical breakdown on the Liquid AI blog and try it yourself on Hugging Face.