inference Flash News List | Blockchain.News

List of Flash News about inference

2026-01-26 16:31
Microsoft MSFT Maia 200 AI Accelerator Now Online in Azure: CEO Says 30% Better Performance per Dollar for Inference

According to @StockMKTNewz, Microsoft CEO Satya Nadella said the new Maia 200 AI accelerator is now online in Azure and is designed for industry-leading inference efficiency, delivering 30% better performance per dollar than current systems. Per the same report, the emphasis on inference efficiency and performance per dollar highlights cost-focused AI infrastructure progress within Azure.

Source
2026-01-26 16:01
Azure Maia 200 AI Accelerator Now Live: 30% Better Performance per Dollar, Over 10 PFLOPS FP4 and 216GB HBM3e

According to @satyanadella, Microsoft’s Maia 200 AI accelerator is now online in Azure, designed for industry-leading inference efficiency with a stated 30% better performance per dollar than current systems. Per the same announcement, the accelerator delivers over 10 PFLOPS of FP4 compute, about 5 PFLOPS of FP8, and 216GB of HBM3e memory with 7 TB/s of bandwidth for high-throughput inference on Azure. The release targets cost-efficient inference at scale within Azure’s AI infrastructure.

Source
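Taking the quoted figures at face value, the compute-to-bandwidth balance point implied by the specs can be sketched with a standard roofline calculation. This is a derived illustration, not a number from the announcement; the "current systems" baseline and real-world utilization are unknown.

```python
# Roofline balance point from the quoted Maia 200 figures (assumed accurate
# as reported). Workloads whose arithmetic intensity falls below these
# thresholds (e.g. small-batch LLM decoding) would be limited by the 7 TB/s
# memory system rather than by peak compute.
FP4_FLOPS = 10e15   # >10 PFLOPS FP4 (quoted)
FP8_FLOPS = 5e15    # ~5 PFLOPS FP8 (quoted)
MEM_BW = 7e12       # 7 TB/s HBM3e bandwidth (quoted)

def balance_point(flops: float, bandwidth: float) -> float:
    """FLOPs per byte needed for a kernel to be compute-bound, not memory-bound."""
    return flops / bandwidth

print(f"FP4 balance point: {balance_point(FP4_FLOPS, MEM_BW):.0f} FLOP/byte")
print(f"FP8 balance point: {balance_point(FP8_FLOPS, MEM_BW):.0f} FLOP/byte")
```

The higher the balance point, the more an accelerator depends on high arithmetic intensity (large batches, low-precision formats) to reach its peak throughput.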
2025-04-09 17:17
Google Unveils 7th-Gen TPU 'Ironwood' with Significant Performance Enhancements

According to @sundarpichai, Google has announced the 7th-generation TPU, named 'Ironwood', at the #GoogleCloudNext event in Las Vegas. The new TPU is designed specifically for inference tasks, boasting a 3,600x performance increase and a 29x efficiency boost compared to the first Cloud TPU. The release is expected later this year and could impact AI-related cryptocurrency projects that rely on cloud computing efficiency.

Source
2025-02-18 07:04
DeepSeek Introduces NSA: Optimizing Sparse Attention for Enhanced Training

According to DeepSeek, the NSA (Natively Trainable Sparse Attention) mechanism is designed to improve ultra-fast long-context training and inference through a dynamic hierarchical sparse strategy that combines coarse-grained token compression with fine-grained token selection, potentially enhancing trading algorithms by increasing processing efficiency and reducing computational load.

Source
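The announcement does not include implementation details, but the two-stage idea it describes (compress tokens into coarse blocks to score regions cheaply, then attend only to tokens selected from the top-scoring blocks) can be illustrated with a minimal sketch. This is a generic hierarchical sparse-attention toy for a single query, not DeepSeek's NSA implementation; all names and parameters are hypothetical.

```python
import numpy as np

def sparse_attention_sketch(q, K, V, block=4, top_blocks=2):
    """Toy hierarchical sparse attention for one query vector q.

    1. Coarse stage: compress each block of keys to its mean and score
       the blocks against the query.
    2. Fine stage: keep only the tokens belonging to the top-scoring blocks.
    3. Attend over the selected tokens, reducing per-query work from O(N)
       tokens to O(top_blocks * block) tokens.
    """
    n, d = K.shape
    n_blocks = n // block
    # Coarse: one mean key per block, scored against the query.
    block_keys = K[: n_blocks * block].reshape(n_blocks, block, d).mean(axis=1)
    keep = np.argsort(block_keys @ q)[-top_blocks:]  # indices of top blocks
    # Fine: gather the token indices inside the selected blocks.
    idx = np.concatenate([np.arange(b * block, (b + 1) * block) for b in keep])
    # Standard scaled-dot-product attention over the selected tokens only.
    scores = K[idx] @ q / np.sqrt(d)
    w = np.exp(scores - scores.max())
    w /= w.sum()
    return w @ V[idx]

rng = np.random.default_rng(0)
K = rng.normal(size=(16, 8))
V = rng.normal(size=(16, 8))
q = rng.normal(size=8)
out = sparse_attention_sketch(q, K, V)  # attends to 8 of the 16 tokens
```

A trainable variant (as the name "Natively Trainable" suggests) would replace the hard top-k selection with a differentiable scoring mechanism; the sketch above uses hard selection for clarity.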
2025-01-27 00:33
Paolo Ardoino Discusses Future of AI Model Training and Cost Efficiency

According to Paolo Ardoino, the future of AI model training will not rely on the brute force of 1 million GPUs. Instead, better models will significantly reduce training costs, while access to data will remain crucial. Ardoino suggests that inference will move to local or edge computing, making today's spending on brute-force methods look inefficient in hindsight.

Source