QUANTIZATION News - Blockchain.News

Enhancing AI Model Efficiency with Quantization Aware Training and Distillation

Explore how Quantization Aware Training (QAT) and Quantization Aware Distillation (QAD) optimize AI models for low-precision deployment, preserving accuracy while improving inference performance.
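As a concrete illustration of the idea behind QAT, the minimal PyTorch sketch below inserts "fake quantization" into a linear layer's forward pass so training sees low-precision rounding while gradients flow at full precision. The module names and the 8-bit setting are illustrative assumptions, not the APIs or configuration from the article.

```python
import torch
import torch.nn as nn

class FakeQuant(nn.Module):
    """Simulate low-precision rounding in the forward pass (illustrative sketch)."""
    def __init__(self, bits: int = 8):
        super().__init__()
        self.qmax = 2 ** (bits - 1) - 1  # e.g. 127 for int8

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        scale = x.abs().max().clamp(min=1e-8) / self.qmax      # per-tensor absmax scale
        q = torch.round(x / scale).clamp(-self.qmax, self.qmax) * scale
        # Straight-through estimator: quantized values forward, full-precision gradients backward.
        return x + (q - x).detach()

class QATLinear(nn.Module):
    """A linear layer whose weights pass through fake quantization during training."""
    def __init__(self, in_features: int, out_features: int):
        super().__init__()
        self.linear = nn.Linear(in_features, out_features)
        self.fq = FakeQuant(bits=8)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        w_q = self.fq(self.linear.weight)
        return nn.functional.linear(x, w_q, self.linear.bias)

layer = QATLinear(16, 4)
loss = layer(torch.randn(2, 16)).sum()
loss.backward()  # gradients reach the full-precision weights despite the rounding
```

QAD extends the same setup by additionally training the quantized student against the outputs of a full-precision teacher model, on top of the ordinary task loss.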

Enhancing Large Language Models: NVIDIA's Post-Training Quantization Techniques

NVIDIA's post-training quantization (PTQ) techniques improve the performance and efficiency of AI models, leveraging low-precision formats such as NVFP4 to optimize inference without retraining.
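NVFP4 itself is NVIDIA-specific; as a generic, hedged sketch of what PTQ does rather than NVIDIA's tooling, the NumPy snippet below calibrates a per-tensor absmax scale and quantizes trained weights to int8 with no gradient updates. The function names are hypothetical.

```python
import numpy as np

def ptq_int8(weights: np.ndarray):
    """Post-training quantization: map trained float weights to int8 via an absmax scale."""
    scale = float(np.abs(weights).max()) / 127.0           # one scale per tensor
    q = np.clip(np.round(weights / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q: np.ndarray, scale: float) -> np.ndarray:
    return q.astype(np.float32) * scale

w = np.random.randn(256, 256).astype(np.float32)           # stand-in for a trained weight matrix
q, scale = ptq_int8(w)
err = np.abs(w - dequantize(q, scale)).mean()
print(f"int8 storage: {q.nbytes} bytes vs fp32 {w.nbytes} bytes, mean abs error {err:.5f}")
```

Production PTQ pipelines additionally calibrate activation ranges on sample inputs and typically use per-channel or per-block scales to reduce quantization error.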

Nexa AI Enhances DeepSeek R1 Distill Performance with NexaQuant on AMD Platforms

Nexa AI introduces NexaQuant, a quantization technology for DeepSeek R1 Distill models that improves inference performance and reduces memory footprint on AMD platforms.
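To make the memory-footprint claim concrete, here is back-of-the-envelope arithmetic for weight storage at different precisions (an illustrative calculation with an assumed 8B-parameter distilled model, not Nexa AI's published figures):

```python
def weight_memory_gb(params: float, bits: int) -> float:
    """Approximate weight storage: parameter count x bits per weight, ignoring scales/metadata."""
    return params * bits / 8 / 1e9

params = 8e9  # e.g. a DeepSeek R1 Distill Llama 8B-class model (assumed size)
for bits in (16, 8, 4):
    print(f"{bits:>2}-bit weights: {weight_memory_gb(params, bits):.1f} GB")
# 16-bit ~16.0 GB, 8-bit ~8.0 GB, 4-bit ~4.0 GB of weight storage
```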
