
Burki + Groq

Ultra-fast LLM inference for real-time voice agents.

Groq's LPUs deliver some of the fastest LLM inference available today — often 5–10x faster than running the same open-weights model on GPUs. Burki uses Groq for time-sensitive voice agents, where time to first token drives the user-perceived response time.

Why connect Groq to Burki?

A voice AI assistant is only useful when it can connect to the rest of your stack. The Groq integration helps Burki fit into the systems your team already uses for calling, transcription, reasoning, speech, routing, or customer data. That means the assistant can move from a demo into a production workflow without forcing you to replace your existing tools.

Burki keeps the integration flexible. You can run one assistant with Groq, pair it with other providers for a full voice pipeline, and change providers later if your cost, latency, coverage, or quality requirements change. This is especially useful for teams that need different settings per assistant, region, campaign, or customer segment.

The result is a voice agent that is easier to operate: fewer hard-coded assumptions, clearer provider boundaries, and a pricing model that separates Burki's platform fee from the third-party services you already trust.

What you get with Burki + Groq

  • Sub-200ms time-to-first-token on Llama 3.x, Mixtral, and other open models
  • Drop-in replacement for OpenAI in Burki's standard pipeline
  • Per-token pricing comparable to OpenAI's, at far lower latency
  • Tool calling support for function execution
  • Best paired with Cartesia or ElevenLabs Turbo for end-to-end sub-600ms response
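As a rough sanity check, the sub-600ms target above can be framed as a per-stage latency budget. The stage names and millisecond figures below are illustrative assumptions loosely based on the numbers quoted in this list, not measured benchmarks:

```python
# Illustrative per-stage budget for one voice turn. These figures are
# assumptions for a back-of-the-envelope check, not benchmarks.
PIPELINE_BUDGET_MS = {
    "stt_final_transcript": 150,     # streaming speech-to-text endpointing
    "llm_time_to_first_token": 200,  # Groq TTFT on Llama 3.x (sub-200ms claim)
    "tts_time_to_first_audio": 150,  # e.g. Cartesia or ElevenLabs Turbo
    "network_overhead": 80,          # transport and orchestration slack
}

def total_latency_ms(budget: dict) -> int:
    """Sum the per-stage budget to check it against the end-to-end target."""
    return sum(budget.values())

print(total_latency_ms(PIPELINE_BUDGET_MS))  # 580 — under the 600ms target
```

The point of a budget like this is that the LLM's time-to-first-token is only one of several stages; shaving it with Groq only pays off end-to-end if the STT and TTS stages also stream.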

When to choose Groq

Choose Groq when you want open-weights models at lower latency than the closed providers, or when you're using a model the closed providers don't offer. Choose OpenAI or Anthropic when you need the strongest reasoning quality regardless of latency.

How to configure

# .env
GROQ_API_KEY=gsk_...

# Per-assistant config:
# LLM -> Provider: Groq
# Model: llama-3.3-70b-versatile  # or llama-3.1-8b-instant for speed

For full setup, see the docs.
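For illustration, the per-assistant settings above can be expressed as a small config helper. This is a hypothetical sketch — the field names (`provider`, `api_key_env`, and so on) are assumptions for this example, not Burki's actual schema:

```python
# Hypothetical helper mirroring the per-assistant config above.
# Field names are illustrative; Burki's real schema may differ.
def groq_llm_config(fast: bool = False) -> dict:
    """Return LLM settings for an assistant: the 8B model when raw speed
    matters more than quality, the 70B model otherwise."""
    return {
        "provider": "groq",
        "model": "llama-3.1-8b-instant" if fast else "llama-3.3-70b-versatile",
        "api_key_env": "GROQ_API_KEY",  # key stays in .env, not in config
    }

print(groq_llm_config()["model"])          # llama-3.3-70b-versatile
print(groq_llm_config(fast=True)["model"]) # llama-3.1-8b-instant
```

Keeping the key name in the config (rather than the key itself) matches the .env pattern above and makes per-assistant overrides safe to store.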

Pricing

Burki: $0.03/min platform fee. Groq: per-token pricing, e.g. ~$0.59/M input + $0.79/M output for Llama 3.3 70B. Often 3–5x cheaper per token than OpenAI's closed models of comparable capability.
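To see how the two line items combine, here is a back-of-the-envelope estimate for a single call. The token-per-minute figures are assumptions about a typical voice conversation; the per-token rates are the ones quoted above:

```python
# Rates quoted in this section.
BURKI_PER_MIN = 0.03       # USD per minute, Burki platform fee
GROQ_INPUT_PER_M = 0.59    # USD per 1M input tokens (Llama 3.3 70B)
GROQ_OUTPUT_PER_M = 0.79   # USD per 1M output tokens

def call_cost(minutes: float,
              in_tok_per_min: int = 600,   # assumed prompt/context tokens
              out_tok_per_min: int = 300   # assumed generated tokens
              ) -> float:
    """Estimate the total USD cost of one voice call."""
    llm = minutes * (in_tok_per_min * GROQ_INPUT_PER_M
                     + out_tok_per_min * GROQ_OUTPUT_PER_M) / 1_000_000
    return round(minutes * BURKI_PER_MIN + llm, 4)

print(call_cost(5))  # 0.153 — a five-minute call costs about 15 cents
```

At these rates the platform fee dominates: the LLM tokens for a five-minute call cost well under a cent, so model choice is a latency and quality decision more than a cost one.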

Ship a voice agent on Groq today

200 free minutes. No credit card. Five-minute setup.

Start Free