Introducing Llama 4
The Llama 4 models mark the beginning of a new era for the Llama ecosystem, delivering the most scalable generation of Llama. With native multimodality, mixture-of-experts architecture, expanded context windows, significant performance improvements, and optimized computational efficiency, Llama 4 is engineered to address diverse application requirements. The Llama 4 models come in easy-to-deploy sizes, making them adaptable for various use cases.
Benefits
Meet Llama
For more than a decade, Meta has focused on putting tools into the hands of developers and fostering collaboration and advancement among developers, researchers, and organizations. Llama models are available in a range of parameter sizes, enabling developers to select the model that best fits their needs and inference budget. Llama models in Amazon Bedrock open up a world of possibilities because developers don't need to worry about scalability or managing infrastructure. Amazon Bedrock is a turnkey way for developers to get started using Llama.
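As a minimal sketch of what getting started can look like, the following uses boto3 and the Amazon Bedrock Converse API to send a prompt to a Llama model. The model ID and Region shown are examples, not a recommendation; check the Bedrock console for the identifiers available in your account.

```python
# A minimal sketch of calling a Llama model through the Amazon Bedrock
# Converse API with boto3. The model ID and Region below are examples;
# availability varies by account and Region.
import boto3

client = boto3.client("bedrock-runtime", region_name="us-east-1")

response = client.converse(
    modelId="meta.llama3-3-70b-instruct-v1:0",  # example ID; verify in your Region
    messages=[
        {
            "role": "user",
            "content": [{"text": "Summarize the benefits of mixture-of-experts models in two sentences."}],
        }
    ],
    inferenceConfig={"maxTokens": 512, "temperature": 0.5},
)

# The Converse API returns the assistant message under output.message.content
print(response["output"]["message"]["content"][0]["text"])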
Use cases
Llama models excel at image understanding and visual reasoning, language nuance, contextual understanding, and complex tasks such as visual data analysis, image captioning, dialogue generation, and translation, and they can handle multistep tasks seamlessly. Additional use cases that Llama models are a great fit for include sophisticated visual reasoning and understanding, image-text retrieval, visual grounding, document visual question answering, text summarization, text classification, sentiment analysis and nuanced reasoning, language modeling, dialog systems, code generation, and instruction following.
Model versions
Llama 4 Maverick 17B
A general-purpose model featuring 128 experts, 17 billion active parameters, and 400 billion total parameters. It excels at text understanding across 12 languages and at image understanding in English, making it suitable for versatile assistant and chat applications.
Max tokens: 1M
Languages: English, French, German, Hindi, Italian, Portuguese, Spanish, Thai, Arabic, Indonesian, Tagalog, and Vietnamese; image understanding in English only
Fine-tuning supported: No
Supported use cases: High-quality multilingual assistant and chat applications with image understanding, coding assistance, document understanding for structured data extraction, customer support with image analysis capabilities, creative content generation across languages, and research applications requiring text analysis and multimodal data integration
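Because Llama 4 Maverick accepts images alongside text, an image-understanding request can be sketched with the same Converse API by adding an image content block. The model ID below assumes a cross-Region inference profile and is an example only; image prompts are English only, per the table above.

```python
# A hedged sketch of image understanding with Llama 4 via the Bedrock
# Converse API. The model ID is an assumed inference-profile identifier;
# confirm the exact ID in the Bedrock console.
import boto3

client = boto3.client("bedrock-runtime", region_name="us-east-1")

with open("chart.png", "rb") as f:  # hypothetical local image file
    image_bytes = f.read()

response = client.converse(
    modelId="us.meta.llama4-maverick-17b-instruct-v1:0",  # assumed example ID
    messages=[
        {
            "role": "user",
            "content": [
                # The Converse API accepts raw image bytes with a format hint
                {"image": {"format": "png", "source": {"bytes": image_bytes}}},
                {"text": "Describe the key trend shown in this chart."},
            ],
        }
    ],
    inferenceConfig={"maxTokens": 512},
)

print(response["output"]["message"]["content"][0]["text"])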
Llama 4 Scout 17B
A general-purpose multimodal model with 16 experts, 17 billion active parameters, and 109 billion total parameters. Its multimillion-token context window enables comprehensive multi-document analysis, establishing it as a uniquely powerful and efficient model in its class.
Max tokens: 3.5M (10M coming soon)
Languages: English, French, German, Hindi, Italian, Portuguese, Spanish, Thai, Arabic, Indonesian, Tagalog, and Vietnamese; image understanding in English only
Fine-tuning supported: No
Supported use cases: Chat applications requiring high-quality responses and image understanding in multilingual contexts, coding assistance and document intelligence for extracting structured data, customer support with image analysis capabilities, creative content generation across multiple languages, and research applications requiring multimodal data integration
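Scout's large context window is what makes multi-document analysis practical in a single request. A rough sketch of that pattern follows; the file names and model ID are assumptions for illustration.

```python
# A minimal sketch of multi-document analysis that leans on Llama 4 Scout's
# multimillion-token context window: several documents are concatenated into
# one Converse request. File names and the model ID are hypothetical.
import pathlib
import boto3

client = boto3.client("bedrock-runtime", region_name="us-east-1")

docs = []
for path in ["report_q1.txt", "report_q2.txt", "report_q3.txt"]:  # hypothetical files
    docs.append(f"--- {path} ---\n{pathlib.Path(path).read_text()}")

prompt = (
    "Compare the quarterly reports below and list the main changes between them.\n\n"
    + "\n\n".join(docs)
)

response = client.converse(
    modelId="us.meta.llama4-scout-17b-instruct-v1:0",  # assumed example ID
    messages=[{"role": "user", "content": [{"text": prompt}]}],
    inferenceConfig={"maxTokens": 1024},
)

print(response["output"]["message"]["content"][0]["text"])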
Llama 3.3 70B
Text-only 70B instruction-tuned model that provides enhanced performance relative to Llama 3.1 70B, and to Llama 3.2 90B when used for text-only applications. Llama 3.3 70B delivers performance similar to Llama 3.1 405B while requiring only a fraction of the computational resources.
Max tokens: 128K
Languages: English, German, French, Italian, Portuguese, Spanish, and Thai
Fine-tuning supported: No
Supported use cases: Conversational AI designed for content creation, enterprise applications, and research, offering advanced language understanding capabilities, including text summarization, classification, sentiment analysis, and code generation. The model also supports leveraging its outputs to improve other models, including synthetic data generation and distillation
Llama 3.2 90B
Multimodal model that takes both text and image inputs and outputs text. Ideal for applications requiring sophisticated visual intelligence, such as image analysis, document processing, multimodal chatbots, and autonomous systems.
Max tokens: 128K
Languages: English, German, French, Italian, Portuguese, Hindi, Spanish, and Thai
Fine-tuning supported: Yes
Supported use cases: Image understanding, visual reasoning, and multimodal interaction, enabling advanced applications such as image captioning, image-text retrieval, visual grounding, visual question answering, and document visual question answering, with a unique ability to reason and draw conclusions from visual and textual inputs
Nomura uses Llama models from Meta in Amazon Bedrock to democratize generative AI
Aniruddh Singh, Nomura's Executive Director and Enterprise Architect, outlines the financial institution’s journey to democratize generative AI firm-wide using Amazon Bedrock and Llama models from Meta. Amazon Bedrock provides critical access to leading foundation models like Llama, enabling seamless integration. Llama offers key benefits to Nomura, including faster innovation, transparency, bias guardrails, and robust performance across text summarization, code generation, log analysis, and document processing.
TaskUs revolutionizes customer experiences using Llama models from Meta in Amazon Bedrock
TaskUs, a leading provider of outsourced digital services and next-generation customer experience to the world’s most innovative companies, helps its clients represent, protect, and grow their brands. Its innovative TaskGPT platform, powered by Amazon Bedrock and Llama models from Meta, empowers teammates to deliver exceptional service. TaskUs builds tools on TaskGPT that leverage Amazon Bedrock and Llama for cost-effective paraphrasing, content generation, comprehension, and complex task handling.