ARTIFICIAL INTELLIGENCE
Mistral.ai Announces New Free API, Pricing Updates, and Enhanced Models
Mistral.ai unveils a free API tier, improved pricing, a new enterprise-grade Mistral Small, and free vision capabilities on le Chat.
NVIDIA Unveils Mistral-NeMo-Minitron 8B: Compact Language Model with High Accuracy
NVIDIA releases Mistral-NeMo-Minitron 8B, a compact language model delivering state-of-the-art accuracy, optimized for various AI applications.
NVIDIA Unveils Mistral-NeMo-Minitron 8B Model with Superior Accuracy
NVIDIA's new Mistral-NeMo-Minitron 8B model demonstrates superior accuracy across nine benchmarks, using advanced pruning and distillation techniques.
NVIDIA and Mistral Launch NeMo 12B: A High-Performance Language Model on a Single GPU
NVIDIA and Mistral have developed NeMo 12B, a high-performance language model optimized to run on a single GPU, enhancing text-generation applications.
Mistral AI and NVIDIA Introduce Mistral NeMo 12B, a Cutting-Edge Enterprise AI Model
Mistral AI and NVIDIA unveil Mistral NeMo 12B, a customizable and deployable enterprise AI model for chatbots, multilingual tasks, coding, and summarization.
NVIDIA Unveils New NIMs for Mistral and Mixtral AI Models
NVIDIA introduces new NIMs for Mistral and Mixtral models, enhancing AI project deployment with optimized performance and scalability.
Mistral AI Launches Codestral: A Generative AI Model for Code Generation
Mistral AI introduces Codestral, a generative AI model built specifically for code generation tasks.
Mistral AI Unveils Non-Production License to Foster Openness and Sustainable Growth
Mistral AI launches a Non-Production License intended to balance open innovation with sustainable business growth.
Theta EdgeCloud to Launch with Meta Llama 2, Google Gemma, Stable Diffusion, and Other Popular AI Models
Theta Labs is launching EdgeCloud, a platform offering developers access to popular open-source AI models, including Meta Llama 2, Google Gemma, and Stable Diffusion.
Mixtral 8x7B: Elevating Language Modeling with Expert Architecture
Mixtral 8x7B, a Sparse Mixture of Experts model, outperforms leading AI models in efficiency and multilingual tasks, offering reduced bias and broad accessibility under the Apache 2.0 license.