LLM News - Blockchain.News

DEEPSEEK

AutoJudge Revolutionizes LLM Inference with Enhanced Token Processing

AutoJudge introduces a novel method to accelerate large language model inference by optimizing token processing, reducing human annotation needs, and improving processing speed with minimal accuracy loss.

NVIDIA's ComputeEval 2025.2 Challenges LLMs with Advanced CUDA Tasks

NVIDIA expands ComputeEval with 232 new CUDA challenges, testing LLMs' capabilities in complex programming tasks. Discover the impact on AI-assisted coding.

Generative AI Revolutionizes Legal Services with Custom LLMs

Harvey's custom LLMs are transforming legal services by addressing complex legal challenges across various jurisdictions and practice areas, enhancing efficiency and accuracy.

Unsloth Simplifies LLM Training on NVIDIA Blackwell GPUs

Unsloth's open-source framework enables efficient LLM training on NVIDIA Blackwell GPUs, democratizing AI development with faster throughput and reduced VRAM usage.

ATLAS: Revolutionizing LLM Inference with Adaptive Learning

Together.ai introduces ATLAS, a system that speeds up LLM inference by adapting to workloads, achieving 500 tokens per second on DeepSeek-V3.1.

NVIDIA AI Red Team Offers Critical Security Insights for LLM Applications

NVIDIA's AI Red Team has identified key vulnerabilities in AI systems, offering practical advice to enhance security in LLM applications, focusing on code execution, access control, and data exfiltration.

Enhancing LLM Inference with NVIDIA Run:ai and Dynamo Integration

NVIDIA's Run:ai v2.23 integrates with Dynamo to address large language model inference challenges, offering gang scheduling and topology-aware placement for efficient, scalable deployments.

NVIDIA's Run:ai Model Streamer Enhances LLM Inference Speed

NVIDIA introduces the Run:ai Model Streamer, significantly reducing cold start latency for large language models in GPU environments, enhancing user experience and scalability.

Enhancing LLM Inference with CPU-GPU Memory Sharing

NVIDIA introduces a unified memory architecture to optimize large language model inference, addressing memory constraints and improving performance.

Solana (SOL) Bench: Evaluating LLMs' Competence in Crypto Transactions

Solana (SOL) introduces Solana Bench, a tool to assess the effectiveness of LLMs in executing complex crypto transactions on the Solana blockchain.
