Blockchain.News
Search results for "gpu"

NVIDIA Achieves Record Performance in Latest MLPerf Training Benchmarks

NVIDIA's accelerated computing platform sets new records in MLPerf Training v4.0 benchmarks.

Enhanced AI Performance with NVIDIA TensorRT 10.0's Weight-Stripped Engines

NVIDIA introduces TensorRT 10.0 with weight-stripped engines, offering over 95% engine-size compression for AI applications.

How to Build Your Own Coding Copilot with AMD Radeon GPU Platform

Learn how to build a coding Copilot using AMD Radeon GPUs and open-source tools.
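
A build like this typically pairs a Radeon GPU with an open-source model server and an editor plugin. As a hedged sketch (assuming Ollama, which supports ROCm on Radeon GPUs, serving a code model named "codellama" at its default port; both names are assumptions, not details from the article), the completion backend can be queried like this:

    # Hypothetical sketch: query a locally served code model over HTTP.
    # Assumes an Ollama server (runs on ROCm/Radeon GPUs) on its default
    # port 11434, with a model such as "codellama" already pulled.
    import json
    import urllib.request

    payload = {
        "model": "codellama",  # assumed model name
        "prompt": "Write a Python function that reverses a string.",
        "stream": False,
    }
    req = urllib.request.Request(
        "http://localhost:11434/api/generate",
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        print(json.loads(resp.read())["response"])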

NVIDIA Enhances RDMA Performance with DOCA GPUNetIO

NVIDIA introduces GPU-accelerated RDMA with DOCA GPUNetIO, boosting data transfer speeds.

NVIDIA CUDA Toolkit 12.4 Enhances Runtime Fatbin Creation

NVIDIA CUDA Toolkit 12.4 introduces the nvFatbin library, streamlining the creation of fatbins at runtime and enhancing GPU code compatibility.

AMD Unveils ROCm 6.1 Software for Radeon GPUs, Enhancing AI Development

AMD releases ROCm 6.1 software, extending AI capabilities to Radeon desktop GPUs, enabling scalable AI solutions and broader support for AI frameworks.

Streamlining AI Development: Brev.dev Integrates with NVIDIA NGC Catalog for One-Click GPU Deployment

Brev.dev and NVIDIA NGC Catalog simplify AI development with one-click deployment of GPU-optimized software, enhancing efficiency and reducing setup time.

NVIDIA Unveils NVDashboard v0.10 with Enhanced GPU Monitoring Features

NVIDIA's NVDashboard v0.10 introduces WebSocket data streaming, improved usability, and theme support, enhancing real-time GPU monitoring in JupyterLab.

Modelserve: Golem Network's New AI Inference Service

Golem Network introduces Modelserve, a scalable and cost-effective AI model inference service designed for developers and startups.

NVIDIA Delves into RAPIDS cuVS IVF-PQ for Accelerated Vector Search

NVIDIA explores the RAPIDS cuVS IVF-PQ algorithm, enhancing vector search performance through compression and GPU acceleration.

NVIDIA H100 GPUs and TensorRT-LLM Achieve Breakthrough Performance for Mixtral 8x7B

NVIDIA's H100 Tensor Core GPUs and TensorRT-LLM software demonstrate record-breaking performance for the Mixtral 8x7B model, leveraging FP8 precision.

OKX Ventures Backs Compute Labs in Tokenized GPU Market Initiative

OKX Ventures has invested in Compute Labs to advance the tokenized GPU market, enhancing accessibility to compute revenue through blockchain technology.

NVIDIA Fully Adopts Open-Source GPU Kernel Modules in Upcoming R560 Driver Release

NVIDIA transitions to open-source GPU kernel modules with the R560 driver release, enhancing performance and support for modern GPUs.

Golem Network Unveils Golem-Workers API for Enhanced Computational Flexibility

Golem Network introduces Golem-Workers API, offering high-level access to GPU and CPU resources, catering to diverse computational needs beyond AI model deployment.

Golem Network Unveils Updated AI/GPU Roadmap

Golem Network announces an updated AI/GPU roadmap focusing on market-validated initiatives, enhancing GPU resource supply for AI industry needs.

NVIDIA Enhances Meta's Llama 3.1 with Advanced GPU Optimization

NVIDIA collaborates with Meta to optimize Llama 3.1 across its GPU platforms, enhancing performance and safety for developers.

NVIDIA and Mistral Launch NeMo 12B: A High-Performance Language Model on a Single GPU

NVIDIA and Mistral have developed NeMo 12B, a high-performance language model optimized to run on a single GPU, enhancing text-generation applications.
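
A minimal sketch of running the model locally, assuming the Hugging Face checkpoint "mistralai/Mistral-Nemo-Instruct-2407" (an assumption about the release, not stated in the article) and a single GPU with roughly 24 GB of memory for bf16 weights:

    # Hedged sketch: Mistral NeMo 12B via Hugging Face transformers.
    import torch
    from transformers import pipeline

    pipe = pipeline(
        "text-generation",
        model="mistralai/Mistral-Nemo-Instruct-2407",  # assumed model id
        torch_dtype=torch.bfloat16,
        device_map="auto",  # places the 12B weights on the available GPU
    )
    out = pipe("Summarize what a GPU does in one sentence.", max_new_tokens=64)
    print(out[0]["generated_text"])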

Luminary Cloud Accelerates Engineering Simulations with NVIDIA GPUs

Luminary Cloud leverages NVIDIA GPUs to speed up engineering simulations, addressing industry challenges and enhancing productivity.

AMD Instinct MI300X Accelerators Boost Performance for Large Language Models

AMD's MI300X accelerators, with high memory bandwidth and capacity, enhance the performance and efficiency of large language models.

NVIDIA Introduces Advanced Shader Debugger in Nsight Graphics

NVIDIA's new Shader Debugger in Nsight Graphics offers real-time debugging for complex shaders, enhancing GPU debugging capabilities.

Optimizing GPU Clusters for Generative AI Model Training: A Comprehensive Guide

Explore the intricacies of testing and running large GPU clusters for generative AI model training, ensuring high performance and reliability.
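
Cluster burn-in guides of this kind usually start with per-node health checks before any distributed training runs. An illustrative sketch of such a check (my example, not the guide's own tooling) using the NVML Python bindings (pip install nvidia-ml-py):

    # Per-node GPU health check: enumerate devices, report temperature
    # and memory use, so unhealthy nodes can be drained before training.
    import pynvml

    pynvml.nvmlInit()
    for i in range(pynvml.nvmlDeviceGetCount()):
        h = pynvml.nvmlDeviceGetHandleByIndex(i)
        name = pynvml.nvmlDeviceGetName(h)
        temp = pynvml.nvmlDeviceGetTemperature(h, pynvml.NVML_TEMPERATURE_GPU)
        mem = pynvml.nvmlDeviceGetMemoryInfo(h)
        print(f"GPU {i} {name}: {temp} C, {mem.used / 2**30:.1f} GiB used")
    pynvml.nvmlShutdown()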

Enhancing GPU Performance: Tackling Instruction Cache Misses

NVIDIA explores optimizing GPU performance by reducing instruction cache misses, focusing on a genomics workload using the Smith-Waterman algorithm.

NVIDIA AI Workbench Simplifies GPU Utilization on Windows

NVIDIA's AI Workbench streamlines data science, ML, and AI projects across PCs, workstations, datacenters, and cloud environments.

CoreWeave Leads AI Infrastructure with NVIDIA H200 Tensor Core GPUs

CoreWeave becomes the first cloud provider to offer NVIDIA H200 Tensor Core GPUs, advancing AI infrastructure performance and efficiency.

Enhancing CUDA Efficiency: Key Techniques for Aspiring Developers

Discover essential techniques to optimize NVIDIA CUDA performance, tailored for new developers, as explained by NVIDIA experts.

Together AI Boosts NVIDIA H200 and H100 GPU Cluster Performance with Kernel Collection

Together AI enhances NVIDIA H200 and H100 GPU clusters with its Together Kernel Collection, offering significant performance improvements in AI training and inference.

NVIDIA Introduces NVSHMEM 3.0 with Enhanced GPU Communication Features

NVIDIA's NVSHMEM 3.0 offers multi-node support, ABI backward compatibility, and CPU-assisted InfiniBand GPU Direct Async, enhancing GPU communication.

NVIDIA's GeForce 256: The GPU Revolutionizing Gaming and AI

Explore how NVIDIA's GeForce 256 GPU, launched in 1999, transformed gaming and paved the way for advancements in AI, influencing technology and entertainment globally.

AMD Unveils ROCm 6.2.3 Enhancing AI Performance on Radeon GPUs

AMD releases ROCm 6.2.3, boosting AI capabilities on Radeon GPUs with enhanced support for Llama 3, Stable Diffusion, and the Triton framework, improving AI development efficiency.

NVIDIA GPUs Revolutionize Quantum Dynamics Simulations

Researchers utilize NVIDIA GPUs to enhance quantum dynamics simulations, overcoming computational challenges and enabling advancements in quantum computing and material science.

NVIDIA's AI and RTX GPUs Revolutionize Reality Capture

NVIDIA leverages AI and RTX GPUs to enhance reality capture technologies like NeRFs and Gaussian splatting, improving 3D modeling and visualization processes.

Render Network Enhances Cinema 4D with Redshift Support for Superior GPU Rendering

Render Network introduces Redshift support to its Cinema 4D Wizard, offering enhanced GPU rendering capabilities for artists. Explore the latest features and integration details.

NVIDIA's cuOpt Revolutionizes Linear Programming with GPU Acceleration

NVIDIA's cuOpt leverages GPU technology to drastically accelerate linear programming, achieving performance up to 5,000 times faster than traditional CPU-based solutions.

Llama 3.1 405B Achieves 1.5x Throughput Boost with NVIDIA H200 GPUs and NVLink

NVIDIA's latest advancements in parallelism techniques enhance Llama 3.1 405B throughput by 1.5x, using NVIDIA H200 Tensor Core GPUs and NVLink Switch, improving AI inference performance.

Building a Free Whisper API with GPU Backend: A Comprehensive Guide

Discover how developers can create a free Whisper API using GPU resources, enhancing Speech-to-Text capabilities without the need for expensive hardware.
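
The core transcription step behind such an API, as a minimal sketch using the open-source whisper package (pip install openai-whisper; "audio.mp3" is a placeholder input file):

    # Load a Whisper model and transcribe one file; the model runs on
    # the GPU automatically when CUDA is available.
    import whisper

    model = whisper.load_model("base")
    result = model.transcribe("audio.mp3")
    print(result["text"])

Wrapping this call in a small web framework endpoint is what turns it into the "API" the article describes.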

NVIDIA's cuGraph Enhances NetworkX with GPU Acceleration

NVIDIA introduces GPU acceleration for NetworkX using cuGraph, offering significant speed improvements in graph analytics without code changes, ideal for large-scale data processing.
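
A sketch of the zero-code-change dispatch this refers to: with the nx-cugraph backend installed (the package name nx-cugraph-cu12 is my assumption about the wheel), NetworkX routes algorithms to the GPU via its backend mechanism.

    # Same NetworkX code; the backend= keyword sends the computation
    # to cuGraph on the GPU. Without the keyword, an environment
    # variable can make dispatch fully automatic.
    import networkx as nx

    G = nx.karate_club_graph()
    bc = nx.betweenness_centrality(G, backend="cugraph")
    print(max(bc, key=bc.get))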

Boosting LLM Performance on RTX: Leveraging LM Studio and GPU Offloading

Explore how GPU offloading with LM Studio enables efficient local execution of large language models on RTX-powered systems, enhancing AI applications' performance.
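
GPU offloading is configured in the LM Studio UI, but the loaded model is reachable programmatically: LM Studio exposes an OpenAI-compatible local server (default port 1234). A hedged sketch, with the model name as a placeholder:

    # Query a model loaded in LM Studio (with layers offloaded to the
    # RTX GPU) through its OpenAI-compatible local endpoint.
    from openai import OpenAI

    client = OpenAI(base_url="http://localhost:1234/v1", api_key="lm-studio")
    resp = client.chat.completions.create(
        model="local-model",  # placeholder; LM Studio serves whatever is loaded
        messages=[{"role": "user", "content": "Hello from my RTX machine!"}],
    )
    print(resp.choices[0].message.content)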

NVIDIA's GPU Innovations Revolutionize Drug Discovery Simulations

NVIDIA's latest GPU optimization techniques, including CUDA Graphs and C++ coroutines, promise to accelerate pharmaceutical research by enhancing molecular dynamics simulations.

Blender Cycles Joins Render Network's Closed Beta for Decentralized Rendering

Blender Cycles has been integrated into the Render Network's closed beta, offering decentralized GPU rendering to millions of artists. This marks a significant step in expanding Render's multi-render capabilities.

Manta Network Partners with Aethir to Enhance Ecosystem with High-Performance GPU Access

Manta Network collaborates with Aethir to provide high-performance GPU access for applications, enhancing scalability and cost-efficiency within the ecosystem, particularly benefiting AI and gaming sectors.

Enhancing Protein Structure Prediction with GPU-Accelerated MMseqs2

Explore how GPU-accelerated MMseqs2 enhances protein structure prediction, offering faster, scalable, and cost-effective solutions for researchers in computational biology.

NVIDIA RAPIDS 24.10 Enhances NetworkX and Polars with GPU Acceleration

NVIDIA RAPIDS 24.10 introduces GPU-accelerated NetworkX and Polars with zero code changes, enhancing compatibility with Python 3.12 and NumPy 2.x for improved data processing.
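
On the Polars side, the GPU path is selected per query. A minimal sketch, assuming the GPU engine extra is installed (e.g. pip install polars[gpu] from NVIDIA's index):

    # The same lazy query runs on CPU by default; engine="gpu" asks
    # Polars to execute it with the RAPIDS-backed GPU engine.
    import polars as pl

    lf = pl.LazyFrame({"key": [1, 1, 2], "val": [1.0, 2.0, 3.0]})
    out = lf.group_by("key").agg(pl.col("val").mean()).collect(engine="gpu")
    print(out)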

Enhanced UMAP Performance on GPUs with RAPIDS cuML

RAPIDS cuML introduces a faster, scalable UMAP implementation using GPU acceleration, addressing challenges in large dataset processing with new algorithms for improved performance.
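
cuML's UMAP mirrors the umap-learn API, so existing code mostly needs only the import changed. A minimal sketch:

    # GPU UMAP via RAPIDS cuML; input can be a plain NumPy array.
    import numpy as np
    from cuml.manifold import UMAP

    X = np.random.rand(10_000, 50).astype(np.float32)
    embedding = UMAP(n_neighbors=15, n_components=2).fit_transform(X)
    print(embedding.shape)  # (10000, 2)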

NVIDIA's TensorRT-LLM MultiShot Enhances AllReduce Performance with NVSwitch

NVIDIA introduces TensorRT-LLM MultiShot to improve multi-GPU communication efficiency, achieving up to 3x faster AllReduce operations by leveraging NVSwitch technology.

Accelerating Causal Inference with NVIDIA RAPIDS and cuML

Discover how NVIDIA RAPIDS and cuML enhance causal inference by leveraging GPU acceleration for large datasets, offering significant speed improvements over traditional CPU-based methods.
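
Because cuML estimators follow the scikit-learn API, they can be dropped into causal-inference toolkits as GPU-backed nuisance models. A standalone sketch of the drop-in pattern (my illustration, not the article's code):

    # A cuML estimator used exactly like its scikit-learn counterpart;
    # the regression work runs on the GPU.
    import numpy as np
    from cuml.linear_model import LinearRegression

    X = np.random.rand(100_000, 20).astype(np.float32)
    y = X @ np.random.rand(20).astype(np.float32)
    model = LinearRegression().fit(X, y)
    print(model.coef_[:3])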

NVIDIA's cuPyNumeric Enhances GPU Acceleration for Scientific Research

NVIDIA unveils cuPyNumeric, a library that accelerates data analysis by utilizing GPUs, aiding scientists in processing vast datasets efficiently and scaling computations effortlessly.
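
cuPyNumeric is positioned as a drop-in NumPy replacement. A minimal sketch, assuming the cupynumeric package is installed (install details vary by platform):

    # Swap the import and keep the NumPy-style code; arrays and
    # operations are distributed across available GPUs transparently.
    import cupynumeric as np  # instead of: import numpy as np

    a = np.ones((4096, 4096))
    print((a @ a.T).sum())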

NVIDIA and Windows 365: Enhancing AI Workloads with GPU Acceleration

NVIDIA and Windows 365 collaborate to enhance AI workloads with GPU acceleration, offering significant performance boosts for AI-driven applications across various sectors.

Enhancing Data Deduplication with RAPIDS cuDF: A GPU-Driven Approach

Explore how NVIDIA's RAPIDS cuDF optimizes deduplication in pandas, offering GPU acceleration for enhanced performance and efficiency in data processing.
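
The route described is the cudf.pandas accelerator: enable it once and existing pandas deduplication code runs on the GPU where possible, falling back to CPU otherwise. A minimal sketch:

    # In a notebook:  %load_ext cudf.pandas   (before importing pandas)
    # From the CLI:   python -m cudf.pandas script.py
    import pandas as pd

    df = pd.DataFrame({"user": ["a", "a", "b"], "event": [1, 1, 2]})
    deduped = df.drop_duplicates()  # GPU-backed under cudf.pandas
    print(len(deduped))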

Enhancing GPU Workloads with NVIDIA Nsight Graphics 2024.3

NVIDIA Nsight Graphics 2024.3 introduces new features for optimizing GPU workloads, focusing on shader performance and reducing thread divergence in graphics applications.

Warp 1.5.0 Introduces Tile-Based Programming for Enhanced GPU Efficiency

Warp 1.5.0 launches tile-based programming in Python, leveraging cuBLASDx and cuFFTDx for efficient GPU operations, significantly improving performance in scientific computing and simulation.

Optimizing Multi-GPU Data Analysis with RAPIDS and Dask

Explore best practices for leveraging RAPIDS and Dask in multi-GPU data analysis, addressing memory management, computing efficiency, and accelerated networking.
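
The standard setup these best practices build on is one Dask worker per GPU via dask-cuda, with dask_cudf for the dataframe work. A minimal sketch ("data.parquet" is a placeholder path):

    # One CUDA worker per visible GPU, then distributed GPU dataframes.
    from dask.distributed import Client
    from dask_cuda import LocalCUDACluster
    import dask_cudf

    if __name__ == "__main__":
        cluster = LocalCUDACluster()
        client = Client(cluster)
        ddf = dask_cudf.read_parquet("data.parquet")
        print(ddf.groupby("key").size().compute())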

NVIDIA Launches cuPQC for Enhanced GPU-Accelerated Post-Quantum Cryptography

NVIDIA introduces cuPQC, a GPU-accelerated software development kit, aimed at bolstering post-quantum cryptography for higher security against potential quantum computer threats.

NVIDIA's RAPIDS cuDF Enhances pandas Through Unified Virtual Memory

NVIDIA's RAPIDS cuDF utilizes Unified Virtual Memory to boost pandas' performance by 50x, offering seamless integration with existing workflows and GPU acceleration.
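
The unified-memory behavior described here can also be requested explicitly through RMM, the RAPIDS memory manager, so datasets larger than GPU memory spill transparently between device and host. A hedged sketch:

    # Opt in to managed (unified) memory before allocating GPU data.
    import rmm
    import cudf

    rmm.reinitialize(managed_memory=True)
    gdf = cudf.DataFrame({"x": range(1_000_000)})
    print(gdf["x"].mean())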

Microsoft Develops Secret AI Chips to Reduce Development Costs

Microsoft has been developing its own AI chips since 2019 to reduce reliance on Nvidia’s GPUs due to rising costs. The project, called “Athena,” is already being tested by Microsoft’s machine-learning staff and OpenAI developers.

NVIDIA Acquires GPU Orchestration Software Provider Run:ai for $700 Million

NVIDIA has announced its acquisition of Run:ai, an Israeli startup specializing in GPU orchestration software, in a deal reportedly worth about $700 million. The acquisition aims to enhance NVIDIA's ability to manage and optimize AI computing resources across environments; the financial terms have not been officially disclosed.

Theta EdgeCloud Set to Revolutionize AI Computing with Decentralized GPU Power

Theta EdgeCloud is poised to transform AI computing by offering unprecedented access to decentralized GPU resources for AI and video tasks.

Nvidia's Soaring Data Center Revenue Signals Strong AI and GPU Market Position

Nvidia's Q3 fiscal 2024 results show a 279% increase in data center revenue and a 206% overall revenue increase to $18.12 billion, highlighting its success in AI and GPU markets. Analysts predict further growth in 2024.

NFT Marketplace SudoRare Sees Rug Pull Hours After Going Live

SudoRare, a hybrid NFT trading platform, executed a rug pull barely six hours after going live.

Voltage Park's $1 Billion Cloud Infrastructure Targets ML Compute Shortage

Voltage Park launched on October 29, 2023 with a $1 billion cloud infrastructure housing around 24,000 NVIDIA H100 GPUs, aimed at easing the ML compute shortage. A subsidiary of the Navigation Fund, Voltage Park aims to broaden access to ML compute and spur AI innovation.

Elon Musk Moves Forward with AI Plans for Twitter

Elon Musk's recent purchase of nearly 10,000 graphics processing units (GPUs) signals his commitment to an AI project at Twitter. The project, reportedly in its early stages, uses a large language model. Musk has previously expressed concerns about AI and signed an open letter calling for a pause on its development.

HIVE Blockchain Exploring GPU Mineable Coins Ahead of Ethereum Merge

Ahead of the Ethereum merge, HIVE said it has started exploring other GPU-mineable coins for its fleet of GPUs, and that it is beta-testing them this week, before the merge.

Crypto Miner HIVE Blockchain Rebrands to Drive AI Expansion

HIVE Blockchain Technologies Ltd. plans to rebrand as "HIVE Digital Technologies Ltd." The new name reflects the company's expansion into high-performance computing (HPC) data centres built on Nvidia GPUs, targeting the growing adoption of artificial intelligence (AI).

SK Hynix Reports Q4 Profit and Plans for AI GPU Chips

SK Hynix reported a Q4 2023 operating profit of 346 billion won, marking its first profit since Q3 2022, driven by high-end AI chip demand. However, profit-taking led to a 2.6% share drop.

Crypto Mining Hive Signed a $66 million GPU Subscription Agreement with Nvidia

Hive Blockchain Technologies Ltd signed a $66 million Graphics Processing Unit (GPU) procurement agreement with leading GPU maker Nvidia on Thursday. With the announcement, Hive officially joins the Nvidia Partner Network (NPN) cloud service provider program.

Microsoft D3D12 Work Graphs Elevate GPU Autonomy in Gaming and Rendering

Microsoft's official release of D3D12 Work Graphs marks a significant advance in GPU-driven rendering, optimizing resource management and enabling more efficient algorithms.

Ethereum 2.0 Developer Says Phase 0 Most Likely Delayed Until 2021, Vitalik Buterin Disagrees

Justin Drake, an Ethereum Foundation researcher working on Phase 0 of Ethereum 2.0, said the project will not go live until 2021. The announcement of the delay appeared to confuse Vitalik Buterin, who disagreed that the upgrade could not meet its 2020 deadline.

Exclusive: Talent Shortage is The Key Pain Point in the AI Industry

Artificial intelligence is one of the emerging fintech trends in Hong Kong, but even strong AI teams there struggle without sufficient funding and technical support from government and industry leaders. We invited Timothy Leung, Executive Director of HKAI Lab, to share how HKAI Lab facilitates the growth of Hong Kong's AI ecosystem. He identified the talent shortage as the key pain point in the AI industry, one that hinders the integration of AI and blockchain.
