
Llama 3.2 90B Tutorial: Image Captioning App With Streamlit & Groq

Lisa Kudrow
2025-03-01

Meta has finally added multimodality to the Llama ecosystem by introducing the Llama 3.2 11B & 90B vision models. These two models excel at processing both text and images, which led me to try building a project using the 90B version.

In this article, I’ll share my work and guide you through building an interactive image captioning app using Streamlit for the front end and Llama 3.2 90B as the engine for generating captions.

Why Use Llama 3.2 90B for an Image Captioning App

Llama 3.2-Vision 90B is a state-of-the-art multimodal large language model (LLM) built for tasks involving both image and text inputs.

It stands out with its ability to tackle complex tasks like visual reasoning, image recognition, and image captioning. It has been trained on a massive dataset of 6 billion image-text pairs.

Llama 3.2-Vision is a great choice for our app because it supports multiple languages for text tasks, though English is its primary focus for image-related applications. Its key features make it an excellent choice for tasks such as Visual Question Answering (VQA), Document VQA, and image-text retrieval, with image captioning being one of its standout applications.

Let’s explore how these capabilities translate into a real-world application like image captioning.

Image Captioning Pipeline

Image captioning is the automated process of generating descriptive text that summarizes an image's content. It combines computer vision and natural language processing to interpret and express visual details in language.

Traditionally, image captioning has required a complex pipeline, often involving separate stages for image processing and language generation. The standard approach involves three main steps: image preprocessing, feature extraction, and caption generation.

  1. Image preprocessing: Images are typically resized, normalized, and occasionally cropped to ensure they meet the model’s input specifications.
  2. Feature extraction: Visual features are extracted to identify objects, scenes, or relevant details within the image. In most models, this requires a separate vision model to interpret the image, generating structured data that language models can understand.
  3. Caption generation: These extracted features are then used by a language model to craft a coherent description, combining the objects, context, and relationships identified in the visual data.
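To make the contrast concrete, here is roughly what that multi-stage approach looks like in code, using a dedicated captioning model (BLIP via Hugging Face Transformers) purely as an illustration. This is not part of this tutorial's stack, and the checkpoint name and file path are just common choices for a sketch:

# Traditional pipeline sketch: a dedicated vision-language captioning model.
# The processor handles preprocessing; the model extracts visual features
# and decodes them into a caption.
from PIL import Image
from transformers import BlipProcessor, BlipForConditionalGeneration

processor = BlipProcessor.from_pretrained("Salesforce/blip-image-captioning-base")
model = BlipForConditionalGeneration.from_pretrained("Salesforce/blip-image-captioning-base")

image = Image.open("photo.jpg").convert("RGB")             # load the input image
inputs = processor(images=image, return_tensors="pt")      # step 1: preprocessing
output_ids = model.generate(**inputs, max_new_tokens=30)   # steps 2-3: features + decoding
print(processor.decode(output_ids[0], skip_special_tokens=True))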

With Llama 3.2 90B, this traditionally intricate process becomes much simpler. The model's vision adapter integrates visual features into the core language model, enabling it to interpret images directly and generate captions through simple prompts.

By embedding cross-attention layers within its architecture, Llama 3.2 90B allows users to describe an image by merely prompting the model—eliminating the need for separate stages of processing. This simplicity enables more accessible and efficient image captioning, where a single prompt can yield a natural, descriptive caption that effectively captures an image's essence.

Overview of the Image Captioning App

To bring the power of Llama 3.2 90B to life, we’ll build a simple yet effective image captioning application using Streamlit for the front end and Groq for generating captions.

The app will allow users to upload an image and receive a descriptive caption generated by the model with just two clicks. This setup is user-friendly and requires minimal coding knowledge to get started.

Our application will include the following features:

  1. Title: A prominently displayed title, Llama Captioner, to establish the app's purpose.
  2. Upload button: An interface to upload images from the user’s device.
  3. Generate button: A button to initiate the caption generation process.
  4. Caption output: The app will display the generated caption directly on the interface.

Code Implementation for our Llama 3.2 90B App

The Groq API will act as the bridge between the user’s uploaded image and the Llama 3.2-Vision model. If you want to follow along and code with me, make sure you first:

  1. Obtain your Groq API key by signing up at Groq Console.
  2. Save your API key in a credentials.json file to simplify access.
  3. Follow Groq’s quickstart guide for installation and configuration.
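In practice, the setup boils down to installing the two packages this tutorial uses (check Groq's quickstart guide for current instructions):

pip install groq streamlit

And credentials.json only needs the groq_token field that the code below reads; the value here is a placeholder:

{
    "groq_token": "YOUR_GROQ_API_KEY"
}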

The Python code snippet below sets up a Streamlit application to interact with the Groq API. It:

  1. Imports libraries for web app development (Streamlit), AI interactions (Groq), image handling (base64), and file operations (os, json).
  2. Reads the Groq API key from a separate JSON file for enhanced security.
  3. Defines a function to encode images into base64 format for efficient transmission and processing.

import streamlit as st
from groq import Groq
import base64
import os
import json

# Set up Groq API Key
os.environ['GROQ_API_KEY'] = json.load(open('credentials.json', 'r'))['groq_token']

# Function to encode the image
def encode_image(image_path):
    with open(image_path, "rb") as image_file:
        return base64.b64encode(image_file.read()).decode('utf-8')
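As a quick sanity check, assuming a local file photo.jpg exists, the helper returns a plain base64 string:

encoded = encode_image("photo.jpg")
print(encoded[:40])  # start of the base64 string, e.g. '/9j/4AAQSkZJRg...' for a JPEG

Note that the generate_caption function below re-encodes the uploaded file inline, so this helper is mainly useful for captioning images read from disk.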

Next, we write the function below, which generates a textual description of an uploaded image using the Groq API. Here's a breakdown of its functionality:

  1. Image encoding: The uploaded image is converted into a base64-encoded string. This format allows the image data to be easily transmitted within the API request.
  2. Groq API interaction: A Groq client is instantiated to facilitate communication with the Groq service. A chat completion request is formulated, comprising:
  • A user prompt: "What's in this image?"
  • The base64-encoded image data, embedded within a data URI. The llama-3.2-90b-vision-preview model is specified to process the image and generate a textual description.
  3. Caption extraction: The generated caption is extracted from the Groq API response. The first choice's message content, which contains the caption, is returned.
import streamlit as st
from groq import Groq
import base64
import os
import json

# Set up Groq API Key
os.environ['GROQ_API_KEY'] = json.load(open('credentials.json', 'r'))['groq_token']

# Function to encode the image
def encode_image(image_path):
   with open(image_path, "rb") as image_file:
       return base64.b64encode(image_file.read()).decode('utf-8')

Here's the function in full:

# Function to generate caption
def generate_caption(uploaded_image):
    # Encode the uploaded file's bytes as a base64 string for the data URI
    base64_image = base64.b64encode(uploaded_image.read()).decode('utf-8')
    # Groq() reads GROQ_API_KEY from the environment variable set earlier
    client = Groq()
    chat_completion = client.chat.completions.create(
        messages=[
            {
                "role": "user",
                "content": [
                    # The prompt asking the model to describe the image
                    {"type": "text", "text": "What's in this image?"},
                    {
                        "type": "image_url",
                        "image_url": {
                            "url": f"data:image/jpeg;base64,{base64_image}",
                        },
                    },
                ],
            }
        ],
        model="llama-3.2-90b-vision-preview",
    )
    # The caption is the message content of the first completion choice
    return chat_completion.choices[0].message.content
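Although the function is written for Streamlit's uploaded files, it only needs an object with a read() method, so you can smoke-test it with any local image file (the filename here is a placeholder):

with open("photo.jpg", "rb") as f:
    print(generate_caption(f))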

The Final Streamlit App: Llama Captioner

This Streamlit application provides a user-friendly interface for image captioning. Here's a breakdown of its functionality:

  1. Title and file uploader:
  • The app displays a title: "Llama Captioner".
  • A file uploader component allows users to select an image file (JPG, JPEG, or PNG).
  2. Image display:
  • Once an image is uploaded, the app displays it using the st.image function.
  3. Caption generation:
  • A button, "Generate Caption," triggers the caption generation process.
  • When clicked, a spinner indicates that the caption is being generated.
  • The generate_caption function is called to process the uploaded image and obtain a caption.
  • Upon successful generation, a success message is displayed, followed by the generated caption.
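The UI code itself is short. Below is a minimal sketch matching the description above; the exact widget labels and the seek(0) rewind are choices of this sketch rather than requirements:

# Streamlit UI: title, uploader, image preview, and caption generation
st.title("Llama Captioner")

uploaded_image = st.file_uploader("Upload an image", type=["jpg", "jpeg", "png"])

if uploaded_image is not None:
    st.image(uploaded_image, caption="Uploaded Image", use_column_width=True)

    if st.button("Generate Caption"):
        with st.spinner("Generating caption..."):
            uploaded_image.seek(0)  # rewind in case st.image consumed the buffer
            caption = generate_caption(uploaded_image)
        st.success("Caption generated!")
        st.write(caption)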

The screenshot below shows the app in action: an image of Eddie Hall was uploaded to generate a caption. Surprisingly, the model even picked up information that wasn't clearly visible, such as "Strongest Man".

[Screenshot: Llama Captioner displaying the uploaded image of Eddie Hall and its generated caption]

Conclusion

Building an image captioning app with Llama 3.2 90B and Streamlit shows how advanced AI can make tough tasks easier. This project combines a powerful model with a simple interface to create a tool that's both intuitive and easy to use.

As an AI Engineer, I see huge potential in tools like these. They can make technology more accessible, help people engage better with content, and automate processes in smarter ways.

To continue your learning on Llama, I recommend the following resources:

  • How to Run Llama 3.2 1B on an Android Phone With Torchchat
  • Llama 3.2 and Gradio Tutorial: Build a Multimodal Web App
  • Llama Stack: A Guide With Practical Examples
  • Fine-tuning Llama 3.2 and Using It Locally: A Step-by-Step Guide
  • Llama 3.3: Step-by-Step Tutorial With Demo Project
