NVIDIA's TensorRT-LLM MultiShot Enhances AllReduce Performance with NVSwitch
NVIDIA introduces TensorRT-LLM MultiShot to improve multi-GPU communication efficiency, achieving up to 3x faster AllReduce operations by leveraging NVSwitch technology.
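For intuition, here is a minimal sketch of the two-phase communication pattern a multishot-style AllReduce exploits (a reduce-scatter followed by an all-gather), written with torch.distributed. It is an illustration only, not TensorRT-LLM's NVSwitch-accelerated kernel, and assumes a standard torchrun launch.

```python
# Conceptual sketch: AllReduce decomposed into reduce-scatter + all-gather,
# the pattern a multishot-style AllReduce accelerates over NVSwitch.
import os
import torch
import torch.distributed as dist

def multishot_style_allreduce(tensor: torch.Tensor) -> torch.Tensor:
    world_size = dist.get_world_size()
    # Phase 1: each rank reduces one shard of the tensor (reduce-scatter).
    # Assumes the tensor length divides evenly by world_size.
    shards = list(tensor.chunk(world_size))
    my_shard = torch.empty_like(shards[0])
    dist.reduce_scatter(my_shard, shards)
    # Phase 2: every rank gathers all reduced shards (all-gather).
    gathered = [torch.empty_like(my_shard) for _ in range(world_size)]
    dist.all_gather(gathered, my_shard)
    return torch.cat(gathered)

if __name__ == "__main__":
    dist.init_process_group("nccl")  # torchrun provides rank/world size
    torch.cuda.set_device(int(os.environ["LOCAL_RANK"]))
    x = torch.ones(1024, device="cuda")
    y = multishot_style_allreduce(x)  # every element now equals world_size
```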
Enhancing Large Language Models with NVIDIA Triton and TensorRT-LLM on Kubernetes
Explore NVIDIA's methodology for optimizing large language models with Triton Inference Server and TensorRT-LLM, and for deploying and scaling those models efficiently in a Kubernetes environment.
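As one concrete piece of that deployment story, the sketch below polls Triton's standard /v2/health/ready endpoint, the same path a Kubernetes readiness probe typically targets. The in-cluster service name is a hypothetical placeholder.

```python
# Minimal sketch: checking Triton Inference Server's standard HTTP health
# endpoint, as a Kubernetes readiness probe would.
import requests

TRITON_URL = "http://triton-service:8000"  # hypothetical in-cluster Service

def triton_ready(base_url: str = TRITON_URL, timeout_s: float = 2.0) -> bool:
    """Return True if the server reports ready (HTTP 200 on /v2/health/ready)."""
    try:
        resp = requests.get(f"{base_url}/v2/health/ready", timeout=timeout_s)
        return resp.status_code == 200
    except requests.RequestException:
        return False

if __name__ == "__main__":
    print("Triton ready:", triton_ready())
```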
NVIDIA Enhances Llama 3.1 405B Performance with TensorRT Model Optimizer
NVIDIA's TensorRT Model Optimizer significantly boosts the performance of Meta's Llama 3.1 405B large language model on H200 GPUs.
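The sketch below shows the general shape of a post-training quantization pass with Model Optimizer's PyTorch API (mtq.quantize with a preset config and a calibration loop), using a toy model in place of Llama 3.1 405B; exact names may vary by library version.

```python
import torch
import torch.nn as nn
import modelopt.torch.quantization as mtq  # NVIDIA TensorRT Model Optimizer

# Toy stand-in; the real workflow would load Llama 3.1 405B checkpoints.
model = nn.Sequential(nn.Linear(64, 64), nn.ReLU(), nn.Linear(64, 64)).cuda()

calib_data = [torch.randn(8, 64, device="cuda") for _ in range(16)]

def forward_loop(m):
    # Feed representative inputs so activation scales can be collected.
    with torch.no_grad():
        for batch in calib_data:
            m(batch)

# Post-training quantization to FP8 using a preset config.
model = mtq.quantize(model, mtq.FP8_DEFAULT_CFG, forward_loop=forward_loop)
```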
NVIDIA Enhances TensorRT Model Optimizer v0.15 with Improved Inference Performance
NVIDIA releases TensorRT Model Optimizer v0.15, offering enhanced inference performance through new features like cache diffusion and expanded AI model support.
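Cache diffusion amounts to reusing expensive intermediate activations across adjacent denoising steps instead of recomputing them. The sketch below illustrates that idea with a hypothetical CachedBlock wrapper; it is not Model Optimizer's actual API.

```python
# Conceptual sketch of cache diffusion: serve a cached block output for
# several consecutive denoising steps, refreshing it periodically.
import torch
import torch.nn as nn

class CachedBlock(nn.Module):
    def __init__(self, block: nn.Module, refresh_every: int = 3):
        super().__init__()
        self.block = block
        self.refresh_every = refresh_every
        self.step = 0
        self.cache = None

    def forward(self, x):
        # Recompute on the first step and every `refresh_every` steps;
        # otherwise reuse the activation from the last refresh. Adjacent
        # denoising steps change slowly, so the stale output stays close.
        if self.cache is None or self.step % self.refresh_every == 0:
            self.cache = self.block(x)
        self.step += 1
        return self.cache

# Toy denoising loop: the wrapped block runs 1 out of every 3 steps.
block = CachedBlock(nn.Linear(16, 16))
for _ in range(9):
    x = torch.randn(1, 16)  # stand-in for the evolving latent
    y = block(x)
```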
NVIDIA TensorRT-LLM Boosts Hebrew LLM Performance
NVIDIA's TensorRT-LLM and Triton Inference Server optimize performance for Hebrew large language models, overcoming unique linguistic challenges.
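One such challenge is tokenizer efficiency: Hebrew text often fragments into far more tokens per word than English, which directly inflates inference cost. Here is a minimal sketch of measuring that "fertility" with a Hugging Face tokenizer; the checkpoint is illustrative, not the model from the article.

```python
# Sketch: tokens-per-word ("fertility") for Hebrew vs. English text.
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")  # illustrative tokenizer

hebrew = "שלום עולם, מה שלומך היום?"
english = "Hello world, how are you today?"

for label, text in (("Hebrew", hebrew), ("English", english)):
    n_tokens = len(tokenizer.encode(text))
    n_words = len(text.split())
    print(f"{label}: {n_tokens} tokens / {n_words} words "
          f"= {n_tokens / n_words:.2f} tokens per word")
```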
NVIDIA H100 GPUs and TensorRT-LLM Achieve Breakthrough Performance for Mixtral 8x7B
NVIDIA's H100 Tensor Core GPUs and TensorRT-LLM software demonstrate record-breaking performance for the Mixtral 8x7B model, leveraging FP8 precision.
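For a feel of what FP8 (E4M3) quantization does numerically, here is a minimal PyTorch sketch of per-tensor scaling, casting, and dequantizing; TensorRT-LLM performs the equivalent inside fused kernels.

```python
# Conceptual sketch of FP8 (E4M3) quantization: scale into FP8's
# representable range, cast, then dequantize and inspect the error.
import torch

x = torch.randn(4096) * 3.0

# Per-tensor scale mapping the max magnitude near FP8 E4M3's max (448).
fp8_max = torch.finfo(torch.float8_e4m3fn).max
scale = fp8_max / x.abs().max()

x_fp8 = (x * scale).to(torch.float8_e4m3fn)   # quantize
x_deq = x_fp8.to(torch.float32) / scale       # dequantize

print("max abs error:", (x - x_deq).abs().max().item())
```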
Enhanced AI Performance with NVIDIA TensorRT 10.0's Weight-Stripped Engines
NVIDIA introduces TensorRT 10.0 with weight-stripped engines, offering more than 95% engine size compression for AI applications.
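A minimal sketch of requesting a weight-stripped engine through the TensorRT 10 Python API follows. The STRIP_PLAN flag reflects the TensorRT 10 documentation as best understood here; verify names against your installed version, and note the ONNX path is a placeholder.

```python
# Sketch: building a weight-stripped TensorRT engine. STRIP_PLAN omits most
# weights from the serialized plan; they are refitted from the original
# weight source at deployment time.
import tensorrt as trt

logger = trt.Logger(trt.Logger.WARNING)
builder = trt.Builder(logger)
network = builder.create_network(0)  # TensorRT 10 networks are explicit-batch

parser = trt.OnnxParser(network, logger)
with open("model.onnx", "rb") as f:  # placeholder path
    parser.parse(f.read())

config = builder.create_builder_config()
config.set_flag(trt.BuilderFlag.STRIP_PLAN)  # emit a weight-stripped plan

engine_bytes = builder.build_serialized_network(network, config)
with open("model_stripped.plan", "wb") as f:
    f.write(engine_bytes)
# At deployment, a Refitter restores the weights before the engine runs.
```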
StreamingLLM Breakthrough: Handling Over 4 Million Tokens with 22.2x Inference Speedup
SwiftInfer builds on StreamingLLM's attention-sink technique to accelerate large language model inference, efficiently handling over 4 million tokens in multi-round conversations with a 22.2x speedup.
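StreamingLLM's key idea is to keep a few initial "attention sink" tokens plus a sliding window of recent tokens in the KV cache, evicting everything in between so memory stays bounded. The sketch below illustrates that eviction policy with a hypothetical data structure, not SwiftInfer's implementation.

```python
# Conceptual sketch of StreamingLLM's cache policy: sink tokens are kept
# forever, recent tokens live in a fixed-size sliding window.
from collections import deque

class StreamingKVCache:
    def __init__(self, n_sinks: int = 4, window: int = 1020):
        self.n_sinks = n_sinks
        self.sinks = []                     # first tokens, never evicted
        self.recent = deque(maxlen=window)  # sliding window of recent tokens

    def append(self, kv):
        if len(self.sinks) < self.n_sinks:
            self.sinks.append(kv)
        else:
            self.recent.append(kv)  # deque drops the oldest automatically

    def view(self):
        # Attention sees sinks + recent window, so the cache stays bounded
        # no matter how many tokens have streamed through.
        return self.sinks + list(self.recent)

cache = StreamingKVCache(n_sinks=4, window=8)
for token_id in range(100):       # stream 100 tokens
    cache.append(f"kv[{token_id}]")
print(len(cache.view()))          # 12: 4 sinks + 8 recent
```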