Together AI's CDLM Achieves 14.5x Faster AI Inference Without Quality Loss
Together AI has released a post-training technique called Consistency Diffusion Language Models (CDLM) that cuts inference latency by up to 14.5x on coding benchmarks while preserving output quality. The breakthrough addresses two fundamental inefficiencies that have kept diffusion-based language models from competing with traditional autoregressive architectures in production environments.
Standard diffusion language models generate text by iteratively refining a masked sequence over multiple steps—a process that enables parallel token generation but creates punishing computational overhead. Full bidirectional attention requires recomputing attention across the entire context at every denoising step, and reducing step counts typically destroys output quality.
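For intuition, here is a minimal sketch of that iterative masked-denoising loop (illustrative PyTorch only, not Together AI's implementation; the mask token id, step budget, and confidence-based unmasking rule are assumptions): each step scores every masked position in parallel, commits the most confident predictions, and pays a full bidirectional forward pass to do so.

```python
# Minimal sketch of iterative masked denoising (illustrative only; not Together AI's code).
import torch

MASK_ID = 0  # hypothetical id of the [MASK] token

def denoise(model, seq, num_steps, tokens_per_step):
    """seq: 1-D LongTensor with MASK_ID at every position still to be generated."""
    for _ in range(num_steps):
        masked = (seq == MASK_ID).nonzero(as_tuple=True)[0]
        if masked.numel() == 0:
            break
        # Full bidirectional pass over the whole sequence at every step --
        # the recomputation cost described above.
        logits = model(seq.unsqueeze(0)).squeeze(0)          # (seq_len, vocab)
        probs, preds = logits[masked].softmax(-1).max(-1)    # confidence per masked slot
        top = probs.topk(min(tokens_per_step, masked.numel())).indices
        seq[masked[top]] = preds[top]                        # commit the most confident tokens
    return seq
```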
The Technical Fix
CDLM attacks both problems through a three-part training objective. The system collects decoding trajectories from a teacher model, then trains a student model using a block-wise causal attention mask. This architectural shift enables exact KV caching for completed blocks—something impossible with standard bidirectional attention.
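As a rough illustration, a block-wise causal mask of this kind can be built as follows (a sketch under the assumption of full attention within a block and causal attention to earlier blocks; the block size and API details are not specified in the announcement):

```python
# Sketch of a block-wise causal attention mask: full attention inside a block,
# causal attention to earlier blocks, none to later blocks (structure assumed;
# the block size here is illustrative).
import torch

def block_causal_mask(seq_len: int, block_size: int) -> torch.Tensor:
    """Boolean mask, True where a query position may attend to a key position."""
    block_ids = torch.arange(seq_len) // block_size
    return block_ids.unsqueeze(1) >= block_ids.unsqueeze(0)

# 8 tokens with block size 4: two dense 4x4 diagonal blocks plus the lower-left
# block. Keys in a finished block never change between denoising steps, so their
# KV entries can be cached exactly -- unlike full bidirectional attention.
print(block_causal_mask(8, 4).int())
```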
The consistency loss component enforces temporal stability within blocks, teaching the model to finalize multiple tokens reliably rather than degrading when step counts drop. A distillation loss anchors the student's predictions to the teacher's distributions, while an auxiliary masked-denoising objective preserves general reasoning capabilities.
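A hedged sketch of how such a composite objective might be wired together is shown below; the loss weights, divergence choices, and tensor shapes are assumptions, since the announcement only names the three components.

```python
# Hedged sketch of a three-part objective combining consistency, distillation, and
# masked denoising. Weights, divergences, and shapes are assumptions.
import torch.nn.functional as F

def cdlm_loss(student_logits_step_t, student_logits_step_t1, teacher_logits,
              denoise_logits, target_ids, w_cons=1.0, w_dist=1.0, w_aux=0.1):
    # Consistency: student predictions at adjacent denoising steps of the same
    # block should agree, so fewer steps do not degrade the finalized tokens.
    consistency = F.kl_div(student_logits_step_t1.log_softmax(-1),
                           student_logits_step_t.softmax(-1).detach(),
                           reduction="batchmean")
    # Distillation: anchor the student to the teacher's trajectory distributions.
    distillation = F.kl_div(student_logits_step_t.log_softmax(-1),
                            teacher_logits.softmax(-1),
                            reduction="batchmean")
    # Auxiliary masked-denoising cross-entropy to preserve general capabilities.
    auxiliary = F.cross_entropy(denoise_logits.flatten(0, -2), target_ids.flatten())
    return w_cons * consistency + w_dist * distillation + w_aux * auxiliary
```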
Benchmark Performance
On GSM8K chain-of-thought reasoning, CDLM delivered an 11.2x latency improvement. MBPP coding tasks saw the peak 14.5x reduction. Step counts dropped by 4.1x to 7.7x across benchmarks with minimal accuracy degradation.
The contrast with naive step reduction is stark. Simply truncating refinement steps on baseline diffusion models causes marked accuracy collapse. CDLM maintains quality at equivalent step budgets while achieving roughly half the latency through caching—demonstrating that stable multi-token refinement requires explicit training rather than inference-time shortcuts.
Why Block-Wise Architecture Matters
Together AI's hardware analysis reveals why CDLM occupies a computational sweet spot. Autoregressive decoding is memory-bound at small batch sizes, with arithmetic intensity near 1 at batch size 1. Vanilla diffusion models swing to the opposite extreme—compute-bound even at batch size 1 because full bidirectional attention processes entire sequences each step.
Block-wise diffusion sits between these extremes: higher arithmetic intensity than autoregressive models thanks to intra-block parallelism, yet lower than vanilla diffusion. That balance makes it a natural operating point for the small-batch inference scenarios common in production deployments.
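A back-of-the-envelope calculation illustrates the point (a simplified roofline-style estimate with an assumed parameter count and fp16 weights; the real analysis depends on hardware, attention cost, and sequence length):

```python
# Simplified estimate of arithmetic intensity (FLOPs per byte of weights read)
# for one decoding step of a weight-dominated model; the 8B parameter count and
# fp16 width are assumed for illustration.
def arithmetic_intensity(tokens_per_step: int, params: float, bytes_per_param: int = 2) -> float:
    flops = 2 * params * tokens_per_step      # one multiply-add per weight per token
    bytes_moved = params * bytes_per_param    # every weight is read once per step
    return flops / bytes_moved

PARAMS = 8e9  # assumed model size
print(arithmetic_intensity(1, PARAMS))       # autoregressive, batch 1: ~1 FLOP/byte (memory-bound)
print(arithmetic_intensity(32, PARAMS))      # block-wise diffusion, 32-token block: ~32
print(arithmetic_intensity(2048, PARAMS))    # vanilla diffusion over a 2048-token sequence: ~2048
```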
Market Context
The release follows Inception Labs' February 2025 announcement of diffusion-based language models promising 10x faster generation than traditional LLMs. Google's Gemini Diffusion has since demonstrated commercial-grade parity with autoregressive architectures, signaling growing industry confidence in the approach.
CDLM's post-training recipe can theoretically be applied to any block-diffusion model, suggesting the technique's benefits should compound as stronger base models emerge. Together AI points to collecting trajectories from larger teacher models and training mid-scale students as a promising scaling direction—a hint at where inference optimization research may head next.