Flash
April 6, 2025 11:31 PM
Meta Platforms has officially launched the Llama 4 series—its next-generation suite of AI models designed to advance multimodal intelligence. Announced on April 6, the release includes three core variants: Llama 4 Scout, Llama 4 Maverick, and Llama 4 Behemoth.
The models are trained on extensive datasets that include unlabeled text, images, and video, allowing them to understand and respond to complex visual and contextual information. This marks a significant step forward from previous Llama generations, which focused primarily on text-based capabilities.
According to Meta, Llama 4 has already been integrated into Meta AI across 40 countries, although its advanced multimodal functions are currently limited to English-language users in the U.S.
This update is part of Meta’s broader push to compete with OpenAI, Google DeepMind, and Anthropic in the global AI race. With Llama 4, Meta aims to strengthen its position in both consumer AI services and enterprise applications, leveraging its massive global user base across platforms like Facebook, Instagram, and WhatsApp.
Disclaimer: Backdoor provides informational content only; it is not offered or intended to be used as legal, tax, investment, financial, or other advice. Investments in digital assets involve risk, and past performance does not guarantee future results. We recommend conducting your own research before making any investment decisions.