Mistral AI Unveils Ministral 3B and 8B Models for Edge Computing

Ted Hisokawa | Oct 16, 2024 | 14:14 UTC


Mistral AI has announced the launch of two new models, Ministral 3B and Ministral 8B, designed specifically for on-device computing and edge use cases. According to the company, the models were introduced on the first anniversary of the Mistral 7B release, which marked a significant milestone in frontier AI innovation.

Advanced Features and Use Cases

The Ministral models are engineered to excel in knowledge, commonsense reasoning, function-calling, and efficiency within the sub-10B category. They support a context length of up to 128k tokens, with Ministral 8B featuring an interleaved sliding-window attention pattern for faster, more memory-efficient inference. These capabilities make the models suitable for a wide range of applications, including on-device translation, internet-less smart assistants, local analytics, and autonomous robotics.
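
To illustrate the general idea behind sliding-window attention, the sketch below builds a causal mask in which each token attends only to a fixed number of preceding tokens. The window size and layer schedule are purely illustrative assumptions; the announcement does not detail the exact pattern Ministral 8B uses.

```python
import numpy as np

def sliding_window_mask(seq_len: int, window: int) -> np.ndarray:
    """Causal mask where position i attends only to positions j with
    i - window < j <= i, keeping per-token attention cost bounded by the window."""
    i = np.arange(seq_len)[:, None]  # query positions
    j = np.arange(seq_len)[None, :]  # key positions
    return (j <= i) & (i - j < window)

# Tiny demo: 8 tokens, window of 3 (illustrative values only).
print(sliding_window_mask(8, 3).astype(int))

# An "interleaved" pattern would alternate window sizes (or full attention)
# across layers; the real schedule used by Ministral 8B is not public here.
```

Because each row of the mask has at most `window` nonzero entries, memory and compute for attention no longer grow quadratically with the full context length, which is what makes long contexts practical on constrained edge hardware.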

Used alongside larger language models such as Mistral Large, the Ministral models can serve as efficient intermediaries in complex workflows, parsing inputs, routing tasks, and calling APIs with minimal latency and cost. This makes them well suited to both independent developers and large-scale manufacturing teams seeking privacy-first, low-latency inference.
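
As a concrete example of that intermediary role, the sketch below uses the mistralai Python SDK to ask a Ministral model which function to call for a user request. The `ministral-8b-latest` model identifier and the `get_order_status` tool are assumptions made for illustration, not details from the announcement.

```python
import os
from mistralai import Mistral

client = Mistral(api_key=os.environ["MISTRAL_API_KEY"])

# Hypothetical tool the small model can route requests to.
tools = [{
    "type": "function",
    "function": {
        "name": "get_order_status",
        "description": "Look up the status of a customer order by its ID.",
        "parameters": {
            "type": "object",
            "properties": {"order_id": {"type": "string"}},
            "required": ["order_id"],
        },
    },
}]

response = client.chat.complete(
    model="ministral-8b-latest",  # assumed model identifier
    messages=[{"role": "user", "content": "Where is order A1234?"}],
    tools=tools,
    tool_choice="auto",
)

message = response.choices[0].message
if message.tool_calls:
    call = message.tool_calls[0]
    # The caller executes the selected function; the result can then be
    # handed to a larger model such as Mistral Large for the final answer.
    print(call.function.name, call.function.arguments)
```

In a workflow like this, the small model handles parsing and routing locally, and a larger model is only invoked when the task actually requires it, which is where the latency and cost savings come from.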

Performance and Benchmarks

Mistral AI has benchmarked Ministral 3B and 8B against other models, including Gemma 2 2B, Llama 3.2 3B, and Mistral 7B. According to these evaluations, the Ministral models consistently outperform their peers across a range of tasks, underscoring their ability to handle diverse and complex scenarios efficiently.

Availability and Pricing

Both models are now available, with pricing set at $0.10 per million tokens for Ministral 8B and $0.04 per million tokens for Ministral 3B. The models are offered under Mistral's Commercial and Research licenses, with options for self-deployment through commercial licenses and support for lossless quantization to optimize performance for specific use cases. Additionally, the model weights for Ministral 8B Instruct are accessible for research purposes.
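
At those list prices, a back-of-the-envelope cost estimate is straightforward; the monthly token volume below is an assumed workload used purely for illustration.

```python
# Cost estimate at the listed prices (USD per 1M tokens).
PRICE_PER_MILLION = {"Ministral 8B": 0.10, "Ministral 3B": 0.04}

monthly_tokens = 500_000_000  # assumed workload: 500M tokens per month

for model, price in PRICE_PER_MILLION.items():
    cost = (monthly_tokens / 1_000_000) * price
    print(f"{model}: ${cost:,.2f} per month")
# Ministral 8B: $50.00 per month
# Ministral 3B: $20.00 per month
```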

Future Prospects

Mistral AI continues to innovate in frontier AI models, with a commitment to pushing the boundaries of what is possible in edge computing. Since the release of Mistral 7B, the company has made significant strides, as shown by the new Ministral 3B outperforming the year-old Mistral 7B in the company's own benchmarks. Mistral AI looks forward to feedback from users as they explore the capabilities of the Ministral models.


