NVIDIA Brings CUDA Tile Programming to Julia with cuTile.jl Release
NVIDIA releases cuTile.jl, enabling Julia developers to write high-performance GPU kernels using tile-based programming, with performance near parity with the Python cuTile implementation.
NVIDIA cuda.compute Brings C++ GPU Performance to Python Developers
NVIDIA's new cuda.compute library topped GPU MODE benchmarks, delivering CUDA C++ performance through pure Python with 2-4x speedups over custom kernels.
NVIDIA Integrates CUDA Tile Backend for OpenAI Triton GPU Programming
NVIDIA's new CUDA Tile IR backend for OpenAI Triton enables Python developers to access Tensor Core performance without CUDA expertise. Requires Blackwell GPUs.
NVIDIA CUDA 13.1 Drops CUB Boilerplate with New Single-Call API
NVIDIA simplifies GPU development with a CUB single-call API in CUDA 13.1, eliminating the repetitive two-phase temporary-storage allocation pattern without performance loss.
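For context, the boilerplate being dropped is CUB's long-standing two-phase call convention: every device-wide algorithm is invoked twice, once to query scratch-space size and once to run. A minimal sketch using the classic `cub::DeviceReduce::Sum` API (this is the old pattern, not the new 13.1 single-call interface):

```cuda
#include <cub/cub.cuh>

// Classic two-phase CUB usage: the same call appears twice, with a
// caller-managed cudaMalloc of temporary storage in between.
void sum_with_cub(const int *d_in, int *d_out, int num_items)
{
    void   *d_temp_storage    = nullptr;
    size_t  temp_storage_bytes = 0;

    // Phase 1: d_temp_storage is nullptr, so CUB only writes the
    // required scratch size into temp_storage_bytes.
    cub::DeviceReduce::Sum(d_temp_storage, temp_storage_bytes,
                           d_in, d_out, num_items);

    cudaMalloc(&d_temp_storage, temp_storage_bytes);

    // Phase 2: identical call, now performing the actual reduction.
    cub::DeviceReduce::Sum(d_temp_storage, temp_storage_bytes,
                           d_in, d_out, num_items);

    cudaFree(d_temp_storage);
}
```

The single-call API in CUDA 13.1 collapses this query-allocate-call sequence into one invocation, which is the repetition the article refers to.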
NVIDIA cuTile Python Guide Shows 90% cuBLAS Performance for Matrix Ops
NVIDIA releases detailed cuTile Python tutorial for Blackwell GPUs, demonstrating matrix multiplication achieving over 90% of cuBLAS performance with simplified code.
NVIDIA Enhances cuML Accessibility by Reducing CUDA Binary Size for PyPI Distribution
NVIDIA introduces pip-installable cuML wheels on PyPI, simplifying installation and broadening accessibility by reducing CUDA binary sizes.
NVIDIA Enhances Memory Safety with Compile-Time Instrumentation for Compute Sanitizer
NVIDIA's latest update to Compute Sanitizer introduces compile-time instrumentation to improve memory safety in CUDA C++ applications, reducing false negatives and enhancing bug detection.
NVIDIA's ComputeEval 2025.2 Challenges LLMs with Advanced CUDA Tasks
NVIDIA expands ComputeEval with 232 new CUDA challenges, testing LLMs' capabilities in complex programming tasks. Discover the impact on AI-assisted coding.
Enhancing GPU Efficiency: Understanding Global Memory Access in CUDA
Explore how efficient global memory access in CUDA can unlock GPU performance. Learn about coalesced memory patterns, profiling techniques, and best practices for optimizing CUDA kernels.
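The coalescing idea such guides profile comes down to one rule: adjacent threads in a warp should touch adjacent addresses, so the hardware can service the warp with a few wide memory transactions instead of one per thread. A minimal sketch contrasting the two patterns (kernel names are illustrative):

```cuda
// Coalesced access: thread k reads element k, so a 32-thread warp
// reads one contiguous 128-byte span of floats.
__global__ void copy_coalesced(const float *in, float *out, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        out[i] = in[i];
}

// Strided access: thread k reads element k * stride. For large strides,
// each thread's load lands in a different cache line, multiplying the
// number of memory transactions per warp.
__global__ void copy_strided(const float *in, float *out, int n, int stride)
{
    int i = (blockIdx.x * blockDim.x + threadIdx.x) * stride;
    if (i < n)
        out[i] = in[i];
}
```

Profiling both kernels with Nsight Compute makes the difference visible as a drop in achieved global memory throughput for the strided version.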
NVIDIA Enhances Vision AI with CUDA-Accelerated VC-6
NVIDIA introduces CUDA-accelerated VC-6 to optimize vision AI pipelines, using GPU parallelism to reduce I/O bottlenecks and speed up data processing in AI applications.
