Upgraded and Uncensored: Mistral Overhauls Its AI Model

Canadian AI developer Cohere also released an update to its Aya model, touting its multilingual skills and joining Mistral and tech giant Meta in the open source arena.

While Mistral runs on local hardware and will provide uncensored responses, it does include warnings when asked for potentially dangerous or illegal information. If asked how to break into a car, it responds, “To break into a car, you would need to use a variety of tools and techniques, some of which are illegal,” and, along with instructions, adds, “This information should not be used for any illegal activities.”

The latest Mistral release includes both base and instruction-tuned versions of the model. The token context size of Mistral 7B v0.3 was expanded to 32,768 tokens, allowing the model to handle a broader range of words and phrases in its context and improving its performance on diverse texts. A new version of Mistral’s tokenizer offers more efficient text processing and understanding. For comparison, Meta’s Llama has a token context size of 8K, although its vocabulary is much larger at 128K.

Perhaps the most significant new feature is function calling, which allows the Mistral models to interact with external functions and APIs. This makes them highly versatile for tasks that involve creating agents or interacting with third-party tools.

Function calling example: pic.twitter.com/po2kzCRGV7
— Maziyar PANAHI (@MaziyarPanahi) May 22, 2024

The ability to integrate Mistral AI into various systems and services could make the model highly appealing to consumer-facing apps and tools. For example, it can make it easy for developers to set up different agents that interact with each other, search the web or specialized databases for information, write reports, or brainstorm ideas, all without sending personal data to centralized firms like Google or OpenAI.

While Mistral did not provide benchmarks, the enhancements suggest improved performance over the previous version, potentially four times more capable based on vocabulary and token context capacity.
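To illustrate what function calling involves, here is a minimal sketch of the pattern: the developer describes available tools to the model, the model replies with a structured tool-call request, and the application parses that request and runs the matching local function. The `get_weather` tool, its schema, and the simulated model output below are all hypothetical stand-ins, not part of Mistral's actual API.

```python
import json

# Hypothetical local function the model can request; the name and
# behavior are illustrative, not part of Mistral's API.
def get_weather(city: str) -> str:
    # Stand-in for a real weather lookup; returns canned data.
    return f"Sunny in {city}"

# Registry mapping tool names to callables, alongside the kind of
# JSON schema that function-calling models are typically prompted with.
TOOLS = {
    "get_weather": {
        "function": get_weather,
        "description": "Get the current weather for a city",
        "parameters": {"city": {"type": "string"}},
    },
}

def dispatch(model_output: str) -> str:
    """Parse a model's tool-call JSON and invoke the matching function."""
    call = json.loads(model_output)
    tool = TOOLS[call["name"]]
    return tool["function"](**call["arguments"])

# Simulated model response requesting a tool call.
model_output = '{"name": "get_weather", "arguments": {"city": "Paris"}}'
print(dispatch(model_output))  # Sunny in Paris
```

In a real agent loop, the function's return value would be fed back to the model as a new message so it can compose a final answer, which is how the agent and third-party-tool scenarios described above are typically wired together.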
Coupled with the vastly broadened capabilities that function calling brings, the upgrade is a compelling release for the second most popular open-source LLM on the market.

In addition to Mistral’s release, Cohere, a Canadian AI startup, released Aya 23, a family of open-weights models covering 23 languages. This slate of languages is intended to serve nearly half of the world’s population, a bid toward more inclusive AI.

Aya 23 – Powering a new era of multilingual AI research. Learn more at https://t.co/pNaz4VIJ19 pic.twitter.com/7Yku8SaXOx
— Cohere For AI (@CohereForAI) May 24, 2024

The model outperforms its predecessor, Aya 101, and other widely used models such as Mistral 7B v2 (not the newly released v3). Cohere says Aya 23 models are fine-tuned using a diverse multilingual instruction dataset, 55.7 million examples drawn from 161 different datasets, encompassing human-annotated, translated, and synthetic sources, and reports strong results in generative tasks like translation and summarization. The multilingual basis of Aya 23 ensures the models are well-equipped for various real-world applications and makes them a well-honed tool for multilingual AI projects.

Edited by Ryan Ozawa.