List of AI News about ASML
| Time | Details |
|---|---|
| 2026-03-23 20:13 | **Nvidia CEO Jensen Huang Explores Orbital Data Centers: 24/7 Solar, Space Radiators, and Radiation-Hardened AI Infrastructure.** According to Lex Fridman on X, Jensen Huang said Nvidia has engineers actively researching orbital data centers to leverage continuous solar power and dissipate heat via giant radiators in vacuum, addressing challenges like radiation, performance degradation, redundancy, and continuous testing, as reported in Fridman's interview timestamps covering AI data centers in space. According to Sawyer Merritt's post referencing the same interview, Huang emphasized that there is no conduction or convection in space and heat must be evacuated by radiation, framing thermal management and radiation hardening as the primary engineering blockers for AI scale-out in orbit. |
| 2026-03-23 16:49 | **NVIDIA CEO Jensen Huang on AI Scaling Laws, Rack-Scale Systems, and Supply Chain: Key Takeaways and 2026 Business Impact Analysis.** According to Lex Fridman on X, Jensen Huang detailed how NVIDIA applies extreme co-design at rack scale to optimize GPUs, networking, memory, and power for end-to-end AI systems, emphasizing that the datacenter-as-a-computer model is core to sustaining AI scaling laws. In the same interview, Huang cited supply-chain coordination with TSMC and ASML as mission-critical for capacity, yield, and next-generation lithography, underscoring capital intensity and lead-time risk for AI infrastructure buyers. He described memory bandwidth and new interconnects as the primary bottlenecks today, shifting optimization from pure FLOPS to memory-centric architectures and networking fabrics, with implications for model parallelism and inference cost. Power delivery and total cost of ownership drive rack-scale engineering, making energy efficiency per token and per training step a decisive business metric for hyperscalers and AI startups. Huang framed NVIDIA's moat as full-stack integration of silicon, systems, CUDA software, and libraries, positioned to serve emerging opportunities like long-context LLMs, multimodal models, and AI data centers potentially beyond Earth, while noting constraints in geography-sensitive supply chains including China and Taiwan (source: Lex Fridman on X). |
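The radiative-cooling constraint Huang highlights (no conduction or convection in vacuum, so all waste heat must leave by radiation) can be illustrated with the Stefan-Boltzmann law. The sketch below sizes a hypothetical radiator; the power level, radiator temperature, and emissivity are assumed values for illustration only, not figures from the interview, and absorbed solar/Earth flux is ignored for simplicity.

```python
# Minimal sketch: radiator area needed to reject heat purely by radiation,
# per the Stefan-Boltzmann law P = emissivity * sigma * A * T^4.
# All inputs below are illustrative assumptions, not interview figures.

SIGMA = 5.670374419e-8  # Stefan-Boltzmann constant, W / (m^2 * K^4)

def radiator_area_m2(power_w: float, temp_k: float, emissivity: float = 0.9) -> float:
    """Area (m^2) required to radiate `power_w` watts at surface temperature
    `temp_k`, ignoring any heat absorbed from the Sun or Earth."""
    return power_w / (emissivity * SIGMA * temp_k ** 4)

# Hypothetical 1 MW orbital compute pod with radiators held at 300 K:
area = radiator_area_m2(1_000_000, 300.0)
print(f"{area:,.0f} m^2")  # on the order of a few thousand square meters
```

Because rejected power scales with T^4, running radiators hotter shrinks the required area dramatically, which is one reason thermal design and chip operating temperature are coupled problems for any orbital data-center concept.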
