FlashAttention-4 Hits 1,605 TFLOPS on NVIDIA Blackwell GPUs

Alvin Lang   Jan 23, 2026 07:03  UTC 23:03


NVIDIA has released FlashAttention-4, the latest generation of the memory-efficient attention kernel for transformer models. It reaches 1,605 TFLOPS on the company's Blackwell architecture, roughly 71% of the hardware's theoretical peak throughput.

The announcement matters for anyone watching AI infrastructure investments. As large language models push toward longer context windows, the attention mechanism's quadratic memory complexity becomes a brutal bottleneck. FlashAttention-4 attacks this problem directly, and the benchmark numbers suggest meaningful gains for production AI workloads.

What the Numbers Show

On the B200 GPU, FA4 delivers a 3.6x speedup over FlashAttention-2 on the forward pass at a sequence length of 32,768. The backward pass runs 3.15x faster than FA2 under the same conditions. Against existing implementations, FA4 posts a 1.3x improvement over cuDNN's attention kernels and 2.4x over attention kernels written in Triton.
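Throughput figures like these are typically derived by counting the two matrix multiplications in the attention forward pass and dividing by measured wall-clock time. The sketch below shows that arithmetic in plain Python; the batch size, head count, measured time, and the roughly 2.25 PFLOPS dense BF16 peak assumed for B200 are illustrative assumptions, not figures from the benchmark. Under that assumed peak, 1,605 TFLOPS works out to about 71% utilization, consistent with the headline number.

```python
# Rough sketch: how attention-kernel TFLOPS and utilization figures are
# usually computed. All shapes, the measured time, and the assumed peak
# are illustrative placeholders.

def attention_forward_flops(batch, heads, seqlen, head_dim):
    # Two matmuls dominate: Q @ K^T and P @ V, each ~2 * N^2 * d FLOPs
    # (multiply-add counted as two operations); softmax FLOPs are ignored.
    return 4 * batch * heads * seqlen**2 * head_dim

ASSUMED_B200_PEAK_TFLOPS = 2250   # assumption: ~2.25 PFLOPS dense BF16 per GPU

flops = attention_forward_flops(batch=4, heads=16, seqlen=32_768, head_dim=128)
measured_seconds = 0.012          # placeholder: wall-clock time measured on the device

achieved_tflops = flops / measured_seconds / 1e12
utilization = achieved_tflops / ASSUMED_B200_PEAK_TFLOPS
print(f"{achieved_tflops:.0f} TFLOPS, {utilization:.0%} of assumed peak")
```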

The memory efficiency gains are equally significant. Standard attention's memory footprint scales as O(N²) with sequence length, meaning doubling the context window quadruples memory requirements. FA4 brings this down to O(N) through tiling and incremental (online) softmax normalization, which never materializes the full score matrix. NVIDIA claims 20x lower memory usage compared to standard PyTorch attention baselines.
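The tiling-plus-online-softmax idea is independent of any particular GPU and can be sketched in a few lines of NumPy: process K and V in blocks, keep a running row-wise maximum and normalizer, and rescale the partial output as each new block arrives. This is a minimal single-head sketch of the technique, not FA4's actual kernel.

```python
import numpy as np

def tiled_attention(q, k, v, block=128):
    """Single-head attention computed over K/V blocks with online softmax.

    Score-matrix memory is O(N * block) instead of O(N^2).
    Minimal sketch of the technique; not the FA4 kernel.
    """
    n, d = q.shape
    scale = 1.0 / np.sqrt(d)
    out = np.zeros_like(q)
    row_max = np.full(n, -np.inf)   # running max per query row
    row_sum = np.zeros(n)           # running softmax normalizer

    for start in range(0, k.shape[0], block):
        kb, vb = k[start:start + block], v[start:start + block]
        scores = (q @ kb.T) * scale                   # (n, block) score tile
        new_max = np.maximum(row_max, scores.max(axis=1))
        # Rescale previously accumulated output and normalizer to the new max.
        correction = np.exp(row_max - new_max)
        p = np.exp(scores - new_max[:, None])
        row_sum = row_sum * correction + p.sum(axis=1)
        out = out * correction[:, None] + p @ vb
        row_max = new_max

    return out / row_sum[:, None]

# Matches a naive O(N^2)-memory reference up to floating-point error.
rng = np.random.default_rng(0)
q, k, v = (rng.standard_normal((1024, 64)) for _ in range(3))
s = (q @ k.T) / np.sqrt(64)
p = np.exp(s - s.max(axis=1, keepdims=True))
ref = (p / p.sum(axis=1, keepdims=True)) @ v
assert np.allclose(tiled_attention(q, k, v), ref)
```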

Hardware-Software Co-Design

FA4 was built specifically for Blackwell's architectural quirks. The chip presents an asymmetric scaling problem: compute throughput roughly doubles relative to the previous Hopper generation, while memory bandwidth doesn't keep pace. Traditional attention kernels leave the tensor cores sitting idle while they wait for data.
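A back-of-the-envelope roofline calculation makes the imbalance concrete. The peak-throughput and bandwidth figures below are assumptions about B200-class hardware (roughly 2.25 PFLOPS dense BF16 and 8 TB/s of HBM bandwidth), used only to illustrate why the kernel must reuse data on chip rather than round-trip every intermediate through HBM.

```python
# Back-of-the-envelope roofline: how much arithmetic each byte fetched from
# HBM must support before the tensor cores stop waiting on memory.
# Both hardware figures are assumptions for illustration.
ASSUMED_PEAK_FLOPS = 2.25e15     # ~2,250 dense BF16 TFLOPS
ASSUMED_HBM_BYTES_PER_S = 8e12   # ~8 TB/s HBM bandwidth

break_even_intensity = ASSUMED_PEAK_FLOPS / ASSUMED_HBM_BYTES_PER_S
print(f"Need ~{break_even_intensity:.0f} FLOPs per byte of HBM traffic to stay compute-bound")

# A naive attention pass that writes the full N x N score matrix out to HBM
# manages only a few FLOPs per byte, so it is memory-bound; tiling keeps the
# scores on chip and pushes arithmetic intensity well past the break-even point.
```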

The solution leverages Blackwell's dedicated Tensor Memory (TMEM)—256 KB of on-chip memory per streaming multiprocessor. By storing intermediate calculations directly in TMEM instead of shared memory, FA4 sidesteps the bandwidth bottleneck that would otherwise throttle the faster compute units.

Larger tile sizes (up to 128×128) and deeper software pipelines keep the hardware busy. The backward pass, typically the slower half of training, benefits from bypassing register accumulation entirely and keeping those intermediate sums in TMEM instead.
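To give a feel for the sizes involved, the sketch below works out the on-chip footprint of a single 128×128 score tile plus its output accumulator against the 256 KB TMEM budget mentioned above. The choice of FP32 intermediates is an assumption made for illustration, not a statement about FA4's actual data layout.

```python
# Rough footprint of FlashAttention-style tiles against a 256 KB TMEM budget.
# The FP32-intermediate assumption is illustrative only.
TMEM_BYTES = 256 * 1024
TILE_M = TILE_N = 128
HEAD_DIM = 128

score_tile   = TILE_M * TILE_N * 4     # 128x128 FP32 score tile   -> 64 KB
output_accum = TILE_M * HEAD_DIM * 4   # 128x128 FP32 accumulator  -> 64 KB

used = score_tile + output_accum
print(f"{used / 1024:.0f} KB of {TMEM_BYTES / 1024:.0f} KB TMEM "
      f"({used / TMEM_BYTES:.0%}) for one score tile plus its accumulator")
```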

Production Integration

Major inference frameworks, including SGLang and vLLM, already support FA4 for prefill, the compute-heavy phase that processes the input prompt. NVIDIA has incorporated these techniques into cuDNN 9.14, making the optimizations accessible to developers without custom kernel work.
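In practice, "without custom kernel work" usually means reaching the fused cuDNN attention path through an existing framework API. The sketch below shows one way a PyTorch user might route scaled dot-product attention through the cuDNN backend using the backend-selection context manager available in recent PyTorch releases; whether the dispatched kernel actually includes the FA4-derived optimizations depends on the installed PyTorch and cuDNN versions, and the tensor shapes here are illustrative.

```python
import torch
from torch.nn.attention import SDPBackend, sdpa_kernel

# Illustrative shapes: (batch, heads, sequence length, head dim) in BF16 on GPU.
q = torch.randn(1, 16, 4096, 128, device="cuda", dtype=torch.bfloat16)
k = torch.randn_like(q)
v = torch.randn_like(q)

# Ask PyTorch to dispatch scaled_dot_product_attention to the cuDNN backend,
# rather than its Flash, memory-efficient, or math fallbacks.
with sdpa_kernel(SDPBackend.CUDNN_ATTENTION):
    out = torch.nn.functional.scaled_dot_product_attention(q, k, v, is_causal=True)

print(out.shape)  # torch.Size([1, 16, 4096, 128])
```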

For AI companies burning through compute budgets, the efficiency gains translate directly into cost savings. A 3x+ speedup on the attention portion of training means either faster iteration cycles or room to train larger models within existing infrastructure constraints.

The broader trend here: as transformer models grow, algorithmic efficiency at the kernel level becomes as important as raw hardware capability. FlashAttention-4 represents the current frontier of that optimization work.


