List of AI News about CUDA
| Time | Details |
|---|---|
| 2026-03-18 17:45 | **NVIDIA GTC 2015 Revisited: Karpathy Credits Jensen Huang’s Early Deep Learning Bet—A 2026 Analysis**<br>According to Andrej Karpathy on X, NVIDIA CEO Jensen Huang forecasted at GTC 2015 that deep learning would be the next big thing, citing Karpathy’s PhD work on end-to-end image captioning, which linked a ConvNet for image recognition with an autoregressive RNN language model, as a key example. Karpathy notes that this prescient stance, delivered to an audience then dominated by gamers and HPC professionals, helped catalyze NVIDIA’s early platform investment in GPU-accelerated deep learning, which later underpinned the company’s dominance across training and inference workloads. Per the public GTC archives referenced in Karpathy’s post, the 2015 strategic alignment set the stage for today’s foundation-model era, enabling opportunities in multimodal systems, enterprise AI adoption, and accelerated computing stacks spanning CUDA, cuDNN, and TensorRT. |
| 2026-03-17 10:30 | **Nvidia GTC 2026: Latest AI Breakthroughs and Business Impact — Key Announcements and Analysis**<br>According to The Rundown AI’s coverage page, Nvidia used GTC to unveil new AI platform updates and enterprise offerings that expand GPU computing for generative AI workloads. The recap highlights Nvidia’s push to accelerate training and inference efficiency for large language models and multimodal systems, with a focus on enterprise deployment and developer tooling. Per the same coverage, the announcements emphasize opportunities for partners to build domain-specific copilots, optimize inference with model compression, and scale retrieval-augmented generation on Nvidia’s ecosystem. |
| 2026-03-16 19:19 | **Nvidia CEO Forecasts $1 Trillion Revenue by 2027: Latest Analysis on AI Computing Platform Demand**<br>According to Sawyer Merritt on X, Nvidia CEO Jensen Huang announced a target of at least $1 trillion in revenue by 2027 and said computing demand will exceed that, stating, “We are now a computing platform that runs all of AI.” According to Sawyer Merritt’s post, this signals Nvidia’s push beyond GPUs into a full-stack AI computing platform spanning data center GPUs, networking, software, and services. As reported by Sawyer Merritt, the guidance implies aggressive hyperscaler and enterprise AI infrastructure buildouts, creating opportunities for model training, inference acceleration, and AI-native applications on Nvidia’s platform. According to Sawyer Merritt, the statement underscores multi-year demand for systems like H100 and successors, networking like InfiniBand and Ethernet, and the CUDA software ecosystem, shaping 2026–2027 capex cycles for cloud, automotive, and edge AI. |

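The encoder-decoder pattern mentioned in the first item, a ConvNet encoder feeding an autoregressive RNN language model, can be sketched in a few lines. Everything below (the toy vocabulary, the dimensions, the random untrained weights) is purely illustrative of the data flow, not any model Karpathy actually built:

```python
import numpy as np

# Toy sketch of the captioning pattern: a vision encoder maps an image to a
# feature vector, and an autoregressive RNN decodes a caption token by token.
# Weights are random and the vocabulary is hypothetical; only the wiring is real.

rng = np.random.default_rng(0)
VOCAB = ["<start>", "a", "dog", "on", "grass", "<end>"]
D_IMG, D_HID = 8, 16

# "ConvNet" stand-in: a fixed projection of a flattened image to a feature vector.
W_enc = rng.normal(size=(D_IMG, D_HID))

def encode(image_vec):
    # Returns the (D_HID,) image feature that initializes the decoder state.
    return np.tanh(image_vec @ W_enc)

# Autoregressive decoder: a vanilla RNN cell plus an output projection.
W_emb = rng.normal(size=(len(VOCAB), D_HID))        # token embeddings
W_hh = rng.normal(size=(D_HID, D_HID)) * 0.1        # hidden-to-hidden
W_xh = rng.normal(size=(D_HID, D_HID)) * 0.1        # input-to-hidden
W_out = rng.normal(size=(D_HID, len(VOCAB)))        # hidden-to-vocab logits

def decode(h, max_len=5):
    # Greedy decoding: each step conditions on the previous token and state.
    token = VOCAB.index("<start>")
    caption = []
    for _ in range(max_len):
        h = np.tanh(W_emb[token] @ W_xh + h @ W_hh)
        token = int(np.argmax(h @ W_out))
        if VOCAB[token] == "<end>":
            break
        caption.append(VOCAB[token])
    return caption

image = rng.normal(size=D_IMG)
print(decode(encode(image)))  # a short token sequence drawn from VOCAB
```

With trained weights, the same loop is what makes the model "end-to-end": gradients flow from the caption loss back through the RNN into the visual encoder.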