TRAINING News - Blockchain.News

DEEPSEEK

NVIDIA Blackwell Leads MLPerf Training v5.1 with Record-Breaking Performance

NVIDIA's Blackwell architecture achieves top performance across all MLPerf Training v5.1 benchmarks, highlighting advancements in AI training efficiency and precision.

NVIDIA Dominates MLPerf Training v5.1 with Blackwell Ultra GPUs

NVIDIA swept the MLPerf Training v5.1 benchmarks, showcasing superior AI training performance with its Blackwell Ultra GPU architecture across multiple AI model categories.

Boosting Model Training with CUDA-X: An In-Depth Look at GPU Acceleration

Explore how CUDA-X Data Science accelerates model training using GPU-optimized libraries, enhancing performance and efficiency in manufacturing data science.

Enhancing AI Training: NVIDIA's NCCL Advances Cross-Data Center Communication

NVIDIA's NCCL introduces enhanced cross-data center communication features, optimizing AI training by leveraging network topology awareness and supporting multiple data centers with minimal modifications.

Effective FP8 Training: Exploring Per-Tensor and Per-Block Scaling Strategies

Explore NVIDIA's FP8 training strategies, focusing on per-tensor and per-block scaling methods for enhanced numerical stability and accuracy in low-precision AI model training.
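The scaling idea behind that article can be illustrated with a minimal sketch. This is not NVIDIA's implementation: integer rounding stands in for the actual FP8 cast, the block size is arbitrary, and only the E4M3 dynamic range (max magnitude 448) is taken from the FP8 format. It shows why per-block scaling preserves small values that a single per-tensor scale would flush to zero when an outlier dominates.

```python
import numpy as np

FP8_E4M3_MAX = 448.0  # largest representable magnitude in FP8 E4M3

def quantize_per_tensor(x):
    """One scale for the whole tensor: scale = amax(x) / fp8_max."""
    scale = np.abs(x).max() / FP8_E4M3_MAX
    # round() is a simplified stand-in for the FP8 cast
    q = np.clip(np.round(x / scale), -FP8_E4M3_MAX, FP8_E4M3_MAX)
    return q, scale

def quantize_per_block(x, block=4):
    """One scale per contiguous block, limiting an outlier's blast radius."""
    blocks = x.reshape(-1, block)
    scales = np.abs(blocks).max(axis=1, keepdims=True) / FP8_E4M3_MAX
    q = np.clip(np.round(blocks / scales), -FP8_E4M3_MAX, FP8_E4M3_MAX)
    return q, scales

# First block is small-valued, second block contains large outliers.
x = np.array([0.01, -0.02, 0.03, 0.015, 50.0, -60.0, 40.0, 55.0])
qt, st = quantize_per_tensor(x)
qb, sb = quantize_per_block(x)

# Per-tensor: the outlier-driven scale rounds the small block to zero.
# Per-block: each block keeps its own dynamic range, so 0.01 survives.
print(np.round(qt * st, 4))
print(np.round((qb * sb).ravel(), 4))
```

Dequantizing (`q * scale`) makes the information loss visible directly: with one global scale the first four values all collapse to 0.0, while per-block scaling recovers them to within rounding error.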

NVIDIA and AWS Join Forces to Enhance AI Training Scalability

NVIDIA Run:ai and Amazon SageMaker HyperPod integrate to streamline AI training, offering enhanced scalability and resource management across hybrid cloud environments.

Floating-Point 8: Revolutionizing AI Training with Lower Precision

Explore how Floating-Point 8 (FP8) is set to enhance AI training efficiency by balancing computational speed and accuracy, as detailed by NVIDIA's insights.

NVIDIA Expands AI Training with Multilingual Workshop at GTC Paris

NVIDIA Deep Learning Institute introduces a multilingual AI workshop at GTC Paris, addressing challenges in language models and enhancing domain-specific capabilities.

Open-Source AI: Mixture-of-Agents Alignment Revolutionizes Post-Training for LLMs

Mixture-of-Agents Alignment (MoAA) is a groundbreaking post-training method that enhances large language models by leveraging open-source collective intelligence, as detailed in a new ICML 2025 paper.

Anyscale Introduces Comprehensive Ray Training Programs

Anyscale launches new training options for Ray, including free eLearning and instructor-led courses, catering to AI/ML engineers seeking to scale AI applications effectively.