
Amazon Nova Models: A Guide With Examples

Lisa Kudrow
2025-03-02

Amazon has launched a new generation of cutting-edge foundation models designed for cost-effective use at scale. Nova joins Amazon's LLM ecosystem through the Amazon Bedrock service and supports multiple modalities, including text, image, and video generation.

This article will outline the new Amazon Nova models, explain how to access them through the Bedrock service, highlight the capabilities and advantages of each model, and demonstrate their practical applications, including integration into multi-agent applications.

What Are the Amazon Nova Models?

Amazon's Nova models are the highly anticipated foundation models accessible through the Amazon Bedrock service. They are designed for a wide range of applications, including low-cost fast inference, multimedia understanding, and creative content generation. Let's explore each model.

Amazon Nova Micro

The fastest model in the series, with the lowest latency and computing cost. For applications that require fast, text-only generation, Micro is the best fit, with inference speeds of around 200 tokens per second.

Some of Micro's best applications include real-time analytics, interactive chatbots, and high-traffic text generation services.
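For the interactive chatbot use case, streaming matters: tokens should reach the user as they are generated. Below is a minimal sketch using boto3's `converse_stream` with Nova Micro; the model ID is the US cross-region inference profile and may differ in your region, and AWS credentials plus Bedrock model access are assumed.

```python
# Sketch: stream tokens from Nova Micro for a chatbot-style request.
# Assumes AWS credentials and Bedrock model access are already configured.
MODEL_ID = "us.amazon.nova-micro-v1:0"  # cross-region inference profile ID (assumption)

def build_messages(prompt: str) -> list:
    """Build a Bedrock Converse-API message list from a user prompt."""
    return [{"role": "user", "content": [{"text": prompt}]}]

def stream_reply(prompt: str) -> str:
    """Call converse_stream and concatenate the streamed text chunks."""
    import boto3  # imported lazily; only needed when actually calling AWS

    client = boto3.client(service_name="bedrock-runtime")
    response = client.converse_stream(
        modelId=MODEL_ID,
        messages=build_messages(prompt),
    )
    chunks = []
    for event in response["stream"]:
        # Each text delta arrives as a contentBlockDelta event
        if "contentBlockDelta" in event:
            chunks.append(event["contentBlockDelta"]["delta"]["text"])
    return "".join(chunks)

if __name__ == "__main__":
    print(stream_reply("Summarize the latest sales metrics in one sentence."))
```

In a real chatbot you would print each chunk as it arrives instead of joining them at the end.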


Nova Micro benchmark. (Source: Amazon)

Amazon Nova Lite

Lite is the cost-effective multimodal member of the Nova family. It strikes a good balance between speed and accuracy across many tasks, particularly reasoning and translation, compared with similar models such as GPT-4o or Llama.

It can handle a large volume of requests efficiently while maintaining high accuracy. For applications where speed is critical and multiple modalities must be handled, Lite may be the best choice.


Nova Lite benchmark. (Source: Amazon)

Amazon Nova Pro

The most capable text-processing model in the Nova family, Nova Pro delivers impressive accuracy at a relatively low computational cost compared to models with similar capabilities.

According to Amazon, Nova Pro is ideal for applications such as video summarization, question answering, mathematical reasoning, software development, and AI agents that execute multi-step workflows. Like the Micro and Lite models, Nova Pro currently supports fine-tuning.


Nova Pro benchmark. (Source: Amazon)

Amazon Nova Premier

Expected to launch in early 2025, Premier will be the most powerful multimodal model in the series, an upgraded version of the Pro model.

Amazon Nova Canvas

Canvas is Nova's image-generation solution. It can produce high-quality images, control color schemes and styles, and offers features such as inpainting, outpainting, style transfer, and background removal. The model looks well suited to creating marketing images, product mockups, and more.
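To give a flavor of how image generation is invoked, here is a hedged sketch of a Nova Canvas text-to-image call via `invoke_model`. The payload field names (`taskType`, `textToImageParams`, `imageGenerationConfig`) and the model ID follow the shape described in AWS documentation but should be verified against the official docs before use.

```python
import base64
import json

CANVAS_MODEL_ID = "amazon.nova-canvas-v1:0"  # model ID per AWS docs (verify for your region)

def build_canvas_request(prompt: str, width: int = 1024, height: int = 1024) -> dict:
    """Build an InvokeModel request body for a text-to-image task."""
    return {
        "taskType": "TEXT_IMAGE",
        "textToImageParams": {"text": prompt},
        "imageGenerationConfig": {
            "numberOfImages": 1,
            "width": width,
            "height": height,
        },
    }

def generate_image(prompt: str, out_path: str = "out.png") -> None:
    """Invoke Nova Canvas and write the first returned image to disk."""
    import boto3  # imported lazily; requires AWS credentials and model access

    client = boto3.client(service_name="bedrock-runtime")
    response = client.invoke_model(
        modelId=CANVAS_MODEL_ID,
        body=json.dumps(build_canvas_request(prompt)),
    )
    body = json.loads(response["body"].read())
    # Generated images are returned base64-encoded
    with open(out_path, "wb") as f:
        f.write(base64.b64decode(body["images"][0]))

if __name__ == "__main__":
    generate_image("A product mockup of a minimalist coffee mug")
```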

Amazon Nova Reel

Nova Reel is a video-generation model designed to produce high-quality, easily customizable video. It lets users create and control the visual style, pacing, and camera movement of a video. Like other Nova models, Reel ships with built-in safety controls for responsible content generation.

How to Access Amazon Nova Models Through the Amazon Bedrock Playground

You can use the Amazon Bedrock playground to test and compare multiple models in a ready-to-use user interface.

I assume you have the AWS CLI configured and Bedrock available. If not, you can refer to my tutorial on the AWS Multi-Agent Orchestrator, where I detail the steps for setting up an environment with models provided by the Bedrock service. Additionally, Nils Durner’s blog post provides step-by-step screenshots that walk you through setting up your Bedrock service.


Amazon Bedrock Playground

When comparing Nova Micro and Pro, I noticed that the accuracy gap between the two models is not significant. While Micro is more than twice as fast as Pro, its answers are adequate for most everyday use cases. Pro, on the other hand, tends to produce slightly longer and more detailed responses.
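You can reproduce a rough speed comparison yourself. The sketch below times the same prompt against both models through the Converse API; the model IDs are the US cross-region inference profiles (an assumption — adjust for your region), and wall-clock latency will vary with network and load.

```python
import time

# Sketch: time one prompt against Nova Micro and Nova Pro.
MODEL_IDS = {
    "micro": "us.amazon.nova-micro-v1:0",
    "pro": "us.amazon.nova-pro-v1:0",
}

def time_model(client, model_id: str, prompt: str):
    """Return the model's reply text and the wall-clock latency in seconds."""
    start = time.perf_counter()
    response = client.converse(
        modelId=model_id,
        messages=[{"role": "user", "content": [{"text": prompt}]}],
    )
    elapsed = time.perf_counter() - start
    text = response["output"]["message"]["content"][0]["text"]
    return text, elapsed

def compare(prompt: str) -> dict:
    """Map each model name to its measured latency for the given prompt."""
    import boto3  # lazy import: requires configured AWS credentials

    client = boto3.client(service_name="bedrock-runtime")
    return {name: time_model(client, mid, prompt)[1] for name, mid in MODEL_IDS.items()}

if __name__ == "__main__":
    for name, secs in compare("Explain overfitting in two sentences.").items():
        print(f"{name}: {secs:.2f}s")
```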

How to Access Amazon Nova Models Through the Amazon Bedrock API

To use the Nova model through the API and integrate it into your code, first make sure that your AWS account, AWS CLI, and access to the model are set up correctly (the documentation provides guidance for this).

Next, install the boto3 library, the Python SDK for AWS, which allows you to use their models.

<code>pip install boto3</code>

You can programmatically interact with the model using a script as shown below:

<code>import boto3
import json 

client = boto3.client(service_name="bedrock-runtime")

messages = [
    {"role": "user", "content": [{"text": "Write a short poem"}]},
]

model_response = client.converse(
    modelId="us.amazon.nova-lite-v1:0", 
    messages=messages
)

print("\n[Full Response]")
print(json.dumps(model_response, indent=2))

print("\n[Response Content Text]")
print(model_response["output"]["message"]["content"][0]["text"])</code>
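The Converse API also accepts a system prompt and an `inferenceConfig` block for controlling generation. The helper below assembles the keyword arguments for `converse()`; the field names (`system`, `inferenceConfig`, `maxTokens`) follow the standard Bedrock Converse request shape, and the helper function itself is illustrative, not part of any SDK.

```python
# Sketch: add a system prompt and inference parameters to a Converse call.
def build_request(prompt: str, system_prompt: str,
                  temperature: float = 0.3, max_tokens: int = 512) -> dict:
    """Assemble keyword arguments for bedrock-runtime's converse()."""
    return {
        "modelId": "us.amazon.nova-lite-v1:0",
        "system": [{"text": system_prompt}],
        "messages": [{"role": "user", "content": [{"text": prompt}]}],
        "inferenceConfig": {"temperature": temperature, "maxTokens": max_tokens},
    }

def ask(prompt: str, system_prompt: str = "You answer concisely.") -> str:
    """Send the request and return only the generated text."""
    import boto3  # lazy import: requires AWS credentials and Nova model access

    client = boto3.client(service_name="bedrock-runtime")
    response = client.converse(**build_request(prompt, system_prompt))
    return response["output"]["message"]["content"][0]["text"]

if __name__ == "__main__":
    print(ask("What is Amazon Bedrock?"))
```

Lower temperatures make output more deterministic, which suits tasks like translation or extraction; raise it for creative generation.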

Demo Project Using Nova Micro and the AWS Multi-Agent Orchestrator

We will now implement a demo project to test Nova Micro's agent capabilities. We will use the AWS Multi-Agent Orchestrator framework to set up a simplified Python application with two agents: a Python Developer Agent and an ML Expert Agent. To set up the orchestrator, you can follow the AWS Multi-Agent Orchestrator guide.

We will also use Chainlit, an open-source Python package, to implement a simple UI for the application. First, install the necessary libraries (the pinned versions below):

<code>chainlit==1.2.0
multi_agent_orchestrator==0.0.18</code>

We first import the necessary libraries:

<code>import uuid
import asyncio
import chainlit as cl
from multi_agent_orchestrator.orchestrator import MultiAgentOrchestrator, OrchestratorConfig
from multi_agent_orchestrator.classifiers import BedrockClassifier, BedrockClassifierOptions
from multi_agent_orchestrator.agents import (
    AgentResponse,
    BedrockLLMAgent,
    BedrockLLMAgentOptions,
    AgentCallbacks,
)
from multi_agent_orchestrator.types import ConversationMessage</code>

This framework uses a classifier to select the best agent for each incoming user request. We use "anthropic.claude-3-haiku-20240307-v1:0" as the classifier model.

<code>class ChainlitAgentCallbacks(AgentCallbacks):
    def on_llm_new_token(self, token: str) -> None:
        asyncio.run(cl.user_session.get("current_msg").stream_token(token))

# Initialize the orchestrator
custom_bedrock_classifier = BedrockClassifier(BedrockClassifierOptions(
    model_id='anthropic.claude-3-haiku-20240307-v1:0',
    inference_config={
        'maxTokens': 500,
        'temperature': 0.7,
        'topP': 0.9
    }
))

orchestrator = MultiAgentOrchestrator(options=OrchestratorConfig(
        LOG_AGENT_CHAT=True,
        LOG_CLASSIFIER_CHAT=True,
        LOG_CLASSIFIER_RAW_OUTPUT=True,
        LOG_CLASSIFIER_OUTPUT=True,
        LOG_EXECUTION_TIMES=True,
        MAX_RETRIES=3,
        USE_DEFAULT_AGENT_IF_NONE_IDENTIFIED=False,
        MAX_MESSAGE_PAIRS_PER_AGENT=10,
    ),
    classifier=custom_bedrock_classifier
)</code>

Next, we define two agents powered by Nova Micro: one acting as a Python development expert and the other as a machine learning expert.

<code># A sketch of the two Nova Micro-powered agents; the option names follow
# the multi_agent_orchestrator BedrockLLMAgentOptions API.
python_dev_agent = BedrockLLMAgent(BedrockLLMAgentOptions(
    name="Python Developer Agent",
    streaming=True,
    description="Experienced Python developer who answers questions about Python code, debugging, and best practices.",
    model_id="amazon.nova-micro-v1:0",
    callbacks=ChainlitAgentCallbacks()
))

ml_expert_agent = BedrockLLMAgent(BedrockLLMAgentOptions(
    name="Machine Learning Expert Agent",
    streaming=True,
    description="Expert in machine learning concepts, model training, and evaluation.",
    model_id="amazon.nova-micro-v1:0",
    callbacks=ChainlitAgentCallbacks()
))

orchestrator.add_agent(python_dev_agent)
orchestrator.add_agent(ml_expert_agent)</code>

Finally, we set up the body of the script so that the Chainlit UI can handle user requests and agent responses.

<code># Chainlit entry points (a sketch; the handler decorators follow the Chainlit API)
@cl.on_chat_start
async def start():
    # Give each browser session its own user and session IDs
    cl.user_session.set("user_id", str(uuid.uuid4()))
    cl.user_session.set("session_id", str(uuid.uuid4()))

@cl.on_message
async def main(message: cl.Message):
    user_id = cl.user_session.get("user_id")
    session_id = cl.user_session.get("session_id")

    # Placeholder message that the agent callback streams tokens into
    msg = cl.Message(content="")
    await msg.send()
    cl.user_session.set("current_msg", msg)

    # Route the request to whichever agent the classifier selects
    response: AgentResponse = await orchestrator.route_request(
        message.content, user_id, session_id
    )

    # Non-streaming agents return a ConversationMessage to print in full
    if isinstance(response.output, ConversationMessage):
        await msg.stream_token(response.output.content[0]["text"])
    await msg.update()</code>

The result is a Chainlit UI that lets you chat with the Nova-powered agents as needed.


Run our application on Chainlit

Image and video generation models are also available through the API. You can refer to the documentation for scripts that demonstrate how to use them.

Conclusion

The Amazon Nova models represent a major advancement in the foundation-model ecosystem, combining strong accuracy, speed, cost-effectiveness, and multimodal capabilities. As Amazon's LLM suite grows with new launches, it is becoming a compelling option for building cost-effective, scalable applications on AWS.

Whether you are developing agentic AI applications, building a customer-service chatbot, or simply exploring as a developer, trying the Nova models is a worthwhile experience. It will also deepen your understanding of AWS, Bedrock, and Amazon’s LLM tooling.

In this article, we covered the key aspects of these models, how to experiment with them, and how to build a basic agentic AI application using the Nova models.



