The fastest LLM inference on the market.
Groq's LPUs deliver the fastest LLM inference available today — typically 5–10x faster than running the same open-weights model on GPUs. Burki uses Groq for time-sensitive voice agents where token latency drives the user-perceived response time.
A voice AI assistant is only useful when it can connect to the rest of your stack. The Groq integration helps Burki fit into the systems your team already uses for calling, transcription, reasoning, speech, routing, or customer data. That means the assistant can move from a demo into a production workflow without forcing you to replace your existing tools.
Burki keeps the integration flexible. You can run one assistant with Groq, pair it with other providers for a full voice pipeline, and change providers later if your cost, latency, coverage, or quality requirements change. This is especially useful for teams that need different settings per assistant, region, campaign, or customer segment.
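Per-assistant provider settings like those described above can be pictured as a small lookup structure. This is a hedged sketch only: the assistant names, provider pairings, and schema here are hypothetical, not Burki's actual configuration format.

```python
# Hypothetical per-assistant settings; Burki's real schema may differ.
ASSISTANTS = {
    "sales-us": {
        "llm": {"provider": "groq", "model": "llama-3.3-70b-versatile"},
        "stt": {"provider": "deepgram"},    # assumed pairing, not required
        "tts": {"provider": "elevenlabs"},  # assumed pairing, not required
    },
    "support-eu": {
        # A faster, cheaper model tier for a latency-sensitive segment.
        "llm": {"provider": "groq", "model": "llama-3.1-8b-instant"},
        "stt": {"provider": "deepgram"},
        "tts": {"provider": "elevenlabs"},
    },
}

def llm_config(assistant_id: str) -> dict:
    """Resolve the LLM provider settings for one assistant."""
    return ASSISTANTS[assistant_id]["llm"]
```

Because each assistant resolves its own provider block, swapping the LLM for one region or campaign is a one-line change rather than a redeploy.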
The result is a voice agent that is easier to operate: fewer hard-coded assumptions, clearer provider boundaries, and a pricing model that separates Burki's platform fee from the third-party services you already trust.
Choose Groq when you want open-weights model behavior at latency comparable to (or better than) the closed providers, or when you're using a model the closed providers don't ship. Choose OpenAI / Anthropic when you need the strongest reasoning quality regardless of latency.
# .env
GROQ_API_KEY=gsk_...
# Per-assistant config:
# LLM -> Provider: Groq
# Model: llama-3.3-70b-versatile   # or llama-3.1-8b-instant for speed

For full setup, see the docs.
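With the key and model configured, a single assistant turn reaches Groq through its OpenAI-compatible chat-completions endpoint. The sketch below uses only the standard library; the system/user strings are illustrative, and the actual network call is left commented out since it needs a live `GROQ_API_KEY`.

```python
import json
import os
from urllib import request

# Groq exposes an OpenAI-compatible API under /openai/v1.
GROQ_URL = "https://api.groq.com/openai/v1/chat/completions"

def build_payload(model: str, system: str, user: str) -> dict:
    """Assemble a chat-completion request for one assistant turn."""
    return {
        "model": model,
        "messages": [
            {"role": "system", "content": system},
            {"role": "user", "content": user},
        ],
        "temperature": 0.3,  # low temperature keeps voice replies consistent
    }

def complete(payload: dict) -> str:
    """POST the payload to Groq and return the reply text."""
    req = request.Request(
        GROQ_URL,
        data=json.dumps(payload).encode(),
        headers={
            "Authorization": f"Bearer {os.environ['GROQ_API_KEY']}",
            "Content-Type": "application/json",
        },
    )
    with request.urlopen(req) as resp:
        body = json.load(resp)
    return body["choices"][0]["message"]["content"]

payload = build_payload(
    "llama-3.3-70b-versatile",
    "You are a concise phone assistant.",
    "What are your business hours?",
)
# complete(payload) performs the network call; requires GROQ_API_KEY in the env.
```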
Burki: $0.03/min platform fee. Groq: per-token pricing, e.g. ~$0.59/M input + $0.79/M output tokens for Llama 3.3 70B. Often 3–5x cheaper per token than OpenAI for comparable open-weights models.
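The two components combine into a simple per-minute estimate. In this sketch, the default token prices are the Llama 3.3 70B rates quoted above, and the token-per-minute figures in the usage line are assumed values for illustration, not measured Burki traffic.

```python
def call_cost_per_minute(
    input_tok_per_min: float,
    output_tok_per_min: float,
    input_price_per_m: float = 0.59,   # USD per 1M input tokens (Llama 3.3 70B)
    output_price_per_m: float = 0.79,  # USD per 1M output tokens
    platform_fee: float = 0.03,        # Burki per-minute platform fee
) -> float:
    """Estimated total USD cost for one minute of a voice call."""
    token_cost = (
        input_tok_per_min * input_price_per_m
        + output_tok_per_min * output_price_per_m
    ) / 1_000_000
    return platform_fee + token_cost

# Hypothetical busy call: ~1,500 input and ~400 output tokens per minute.
estimate = call_cost_per_minute(1500, 400)
```

At these assumed rates, token cost is roughly a tenth of a cent per minute, so the platform fee dominates the total.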