Grouped Query Attention

Also known as: GQA, Grouped-Query Attention, Group Query Attention

An attention mechanism variant that groups multiple query heads to share key-value heads, balancing the output quality of multi-head attention with the inference speed of multi-query attention. Adopted by most frontier language models.

Grouped Query Attention (GQA) is an attention mechanism that groups multiple query heads to share key-value heads, delivering near-full-attention quality at a fraction of the memory cost during inference.

What It Is

If you’ve used a language model — asked Claude a question, generated code with Cursor, or run Llama on your laptop — the model ran an attention mechanism on every token it produced. Attention is how the model decides which parts of your input matter most for each word it generates. The original design, multi-head attention (MHA), gives every query head its own dedicated key-value head. That produces high-quality output, but it eats memory fast, especially during long conversations or when processing large documents. This tension runs through attention design generally, from self-attention to cross-attention to causal masking: quality and efficiency pull in opposite directions.

Researchers tried an aggressive fix called multi-query attention (MQA), which collapsed all key-value heads into a single shared pair. Inference speed jumped, but output quality dropped noticeably. GQA, introduced by Ainslie et al. at EMNLP 2023, found the middle ground.

Think of it like a newsroom. In multi-head attention, every journalist has a personal research team — thorough but expensive. In multi-query attention, the entire newsroom shares one research assistant — fast but stretched thin. GQA creates small desks of three or four journalists who share a dedicated research team. Each desk has enough support for solid work, and the newsroom runs at a fraction of the cost.

Technically, GQA divides query heads into groups, and each group shares one set of key-value projections. If a model has sixteen query heads and four key-value heads, you get four groups of four queries sharing KV pairs. It’s a spectrum: one KV group gives you MQA; one KV head per query returns to standard MHA. According to Ainslie et al., the result is near-MHA accuracy with near-MQA inference speed — which is why GQA has become the default in most major model families. According to IBM Research, models including Llama 2 and 3, Mistral 7B, Gemma 3, Qwen3, and IBM Granite 3.0 all use GQA.
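That grouping can be made concrete with a toy NumPy sketch (not any model’s actual code; the head counts, dimensions, and function name are made up for illustration, and the causal mask is omitted for brevity):

```python
import numpy as np

def gqa_attention(q, k, v, n_kv_heads):
    """Toy grouped-query attention over one sequence.

    q: (n_q_heads, seq, d)    k, v: (n_kv_heads, seq, d)
    Query heads are split into n_kv_heads groups; every head in a
    group attends using the same shared key-value head.
    """
    n_q_heads, seq, d = q.shape
    group = n_q_heads // n_kv_heads           # query heads per KV head
    out = np.empty_like(q)
    for h in range(n_q_heads):
        kv = h // group                       # which shared KV head this query uses
        scores = q[h] @ k[kv].T / np.sqrt(d)  # (seq, seq) attention logits
        scores = np.exp(scores - scores.max(axis=-1, keepdims=True))
        weights = scores / scores.sum(axis=-1, keepdims=True)  # row-wise softmax
        out[h] = weights @ v[kv]
    return out

# The example from the text: 16 query heads, 4 KV heads -> 4 groups of 4.
rng = np.random.default_rng(0)
q = rng.normal(size=(16, 8, 32))
k = rng.normal(size=(4, 8, 32))
v = rng.normal(size=(4, 8, 32))
print(gqa_attention(q, k, v, n_kv_heads=4).shape)  # (16, 8, 32)
```

The spectrum described above falls out of one parameter: with `k` and `v` shaped for a single head, `n_kv_heads=1` is MQA; with one KV head per query head, `n_kv_heads=16` is standard MHA.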

How It’s Used in Practice

You don’t configure GQA yourself — it’s baked into the model architecture. But it directly affects what you experience as a user. When a coding assistant handles a long file, when a chatbot maintains coherent context across a lengthy conversation, or when you run an open-source model locally without maxing out your GPU, GQA is one reason that works. It reduces the memory footprint of the key-value cache that the model builds during text generation, which means longer contexts fit in less hardware.

For teams evaluating which open-source model to deploy, the attention configuration matters. GQA models need less GPU memory at inference time than equivalent MHA models with the same parameter count. That can mean the difference between fitting on a single consumer GPU or needing a multi-GPU setup.
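Back-of-envelope arithmetic shows where the savings come from. The KV cache stores one key and one value vector per KV head, per layer, per token, so cache size scales linearly with the number of KV heads. A sketch with hypothetical but representative numbers (32 layers, head dimension 128, fp16, 8,192-token context):

```python
def kv_cache_bytes(n_layers, n_kv_heads, head_dim, seq_len, bytes_per_elem=2):
    # 2x for keys and values, cached at every layer for every token
    return 2 * n_layers * n_kv_heads * head_dim * seq_len * bytes_per_elem

mha = kv_cache_bytes(n_layers=32, n_kv_heads=32, head_dim=128, seq_len=8192)
gqa = kv_cache_bytes(n_layers=32, n_kv_heads=8, head_dim=128, seq_len=8192)

print(mha / 2**30, "GiB vs", gqa / 2**30, "GiB")  # 4.0 GiB vs 1.0 GiB
```

Dropping from 32 KV heads to 8 cuts the cache by exactly 4x — often the difference between a single consumer GPU and a multi-GPU setup.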

Pro Tip: When comparing open-source models for local deployment, check the model card for the number of KV heads relative to query heads. Fewer KV heads (a sign of GQA) usually means lower memory requirements during generation — useful if you’re working with limited hardware or long input sequences.
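In practice that check is a one-liner against the model’s config. The field names below follow the common Hugging Face convention (`num_attention_heads`, `num_key_value_heads`); the values here are hypothetical, so verify the actual names in the model card you are reading:

```python
# Hypothetical config excerpt from a model card
config = {"num_attention_heads": 32, "num_key_value_heads": 8}

q_heads = config["num_attention_heads"]
kv_heads = config["num_key_value_heads"]

if kv_heads < q_heads:
    # Fewer KV heads than query heads signals GQA (kv_heads == 1 would be MQA)
    print(f"GQA: {q_heads // kv_heads} query heads per KV head")  # GQA: 4 query heads per KV head
```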

When to Use / When Not

| Scenario | Use | Avoid |
| --- | :-: | :-: |
| Deploying a model where inference cost and latency matter | ✓ | |
| Running inference on memory-constrained hardware (single GPU) | ✓ | |
| Converting an existing MHA model to faster inference without full retraining | ✓ | |
| Training a small research model where memory is not a concern | | ✓ |
| Tasks where maximum attention fidelity outweighs any speed gain | | ✓ |

Common Misconception

Myth: GQA always produces lower quality outputs than full multi-head attention because it shares key-value heads. Reality: According to Ainslie et al., GQA achieves near-MHA accuracy while running at near-MQA speed. The quality gap is small enough that nearly every frontier model family has adopted GQA as the default — a decision that would not survive if the quality loss were significant.

One Sentence to Remember

GQA lets query heads share key-value resources in small groups rather than each getting their own or all sharing one, striking the balance between quality and speed that made modern large language models practical to run at scale.

FAQ

Q: What is the difference between GQA and multi-query attention? A: Multi-query attention uses a single key-value head for all queries, sacrificing quality for speed. GQA uses multiple KV heads shared across groups of queries, recovering most of the quality while staying nearly as fast.

Q: Which models use Grouped Query Attention? A: According to IBM Research, GQA is used in Llama 2 and 3, Mistral 7B, Gemma 3, Qwen3, and IBM Granite 3.0. It has become the standard for most frontier language models.

Q: Can an existing model be converted to use GQA? A: Yes. According to Ainslie et al., models trained with standard multi-head attention can be “uptrained” to use GQA without a full retraining run, making migration practical for existing checkpoints.

Expert Takes

GQA is a clean interpolation between two known endpoints. Multi-head attention dedicates key-value capacity to each query head — precise but memory-intensive during generation. Multi-query attention collapses everything into a single shared pair — fast but lossy. GQA partitions query heads into groups sharing key-value projections. This creates a tunable axis: more groups yield higher fidelity, fewer groups yield faster inference. Most frontier architectures have settled on a point along this axis as their default.

If you’re choosing between models for a context-heavy workflow — code completion across large files, document analysis, or multi-turn conversations — GQA is one reason newer models handle long inputs without stalling. It shrinks the key-value cache that grows with every generated token, which directly affects how much context fits within your hardware budget. You won’t configure it, but understanding it explains why some models run faster on the same machine.

GQA addressed a business problem wrapped inside a research paper. Running large models at scale costs real money, and inference is where the bills accumulate. By cutting memory overhead without meaningfully hurting quality, GQA made deploying capable models commercially viable for more organizations. Every major model family adopted it within two years of publication. That adoption velocity tells you everything about the underlying economics.

The rapid convergence of model families around GQA raises a quiet question about architectural diversity. When every major team adopts the same attention optimization, the field gains efficiency but may lose the variance that drives unexpected discovery. There could be quality tradeoffs in specific tasks that aggregate benchmarks fail to surface. Efficiency is a worthy goal — but we should remain honest about what gets traded away when an entire discipline optimizes for a single design pattern.