How to Use DeepSeek Janus-Pro Locally
DeepSeek, a Chinese AI innovator, has significantly impacted the global AI landscape, causing a $1 trillion decline in US stock market valuations and unsettling tech giants like Nvidia and OpenAI. Its rapid rise to prominence is due to its leading-edge text generation, reasoning, vision, and image generation models. A recent highlight is the launch of its cutting-edge Janus series of multimodal models. This tutorial details setting up a local Docker container to run the Janus model and explore its capabilities.
This guide covers setting up a Janus project, building a Docker container for local execution, and testing its image and text processing capabilities.
The DeepSeek Janus Series represents a new generation of multimodal models, designed to seamlessly integrate visual comprehension and generation using advanced frameworks. The series comprises Janus, JanusFlow, and the high-performance Janus-Pro, each iteration improving efficiency, performance, and multimodal functionality.
Janus employs a novel autoregressive framework, separating visual encoding into distinct pathways for understanding and generation while leveraging a unified transformer architecture. This design resolves inherent conflicts between these functions, boosting flexibility and efficiency. Janus's performance rivals or surpasses specialized models, making it a prime candidate for future multimodal systems.
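The decoupling described above can be pictured with a minimal sketch. Everything here (names, shapes, the stand-in encoders) is invented for illustration and is not the actual Janus code: one pathway produces continuous semantic features for understanding, a separate pathway produces embeddings of discrete visual tokens for generation, and both are routed through a single shared backbone.

```python
import numpy as np

rng = np.random.default_rng(0)
D = 64  # shared backbone hidden size (illustrative)

def understanding_encoder(image):
    """Continuous semantic features (stand-in for a vision encoder)."""
    patches = image.reshape(16, -1)               # 16 patch vectors
    W = rng.normal(size=(patches.shape[1], D))    # random projection
    return patches @ W                            # (16, D) embeddings

def generation_encoder(image):
    """Discrete visual tokens (stand-in for a VQ tokenizer) -> embeddings."""
    codebook = rng.normal(size=(256, D))          # 256-entry codebook
    ids = rng.integers(0, 256, size=16)           # pretend quantization
    return codebook[ids]                          # (16, D) embeddings

def shared_backbone(tokens):
    """Stand-in for the unified autoregressive transformer."""
    return tokens.mean(axis=0)                    # collapse to one vector

image = rng.normal(size=(32, 32))
u = shared_backbone(understanding_encoder(image))
g = shared_backbone(generation_encoder(image))
print(u.shape, g.shape)
```

The point of the sketch: the two encoders can specialize independently, yet both outputs live in the same embedding space the shared backbone consumes.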
JanusFlow integrates autoregressive language modeling with rectified flow, a leading generative modeling technique. Its streamlined design simplifies training within large language model frameworks, eliminating complex modifications. Benchmark results show JanusFlow outperforming both specialized and unified approaches, advancing the state-of-the-art in vision-language modeling.
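Rectified flow, mentioned above, trains a model to predict a constant velocity along straight-line paths between a noise sample and a data sample. A minimal NumPy sketch of the training target (a conceptual illustration, not JanusFlow's actual code):

```python
import numpy as np

rng = np.random.default_rng(0)

# x0: a noise sample, x1: a data sample (tiny toy vectors)
x0 = rng.normal(size=4)
x1 = np.array([1.0, 2.0, 3.0, 4.0])

def interpolate(x0, x1, t):
    """Straight-line path used by rectified flow: x_t = (1 - t) * x0 + t * x1."""
    return (1.0 - t) * x0 + t * x1

# The regression target for the velocity field is constant along the path:
velocity_target = x1 - x0

# At training time one samples t ~ U[0, 1], forms x_t, and fits
# model(x_t, t) ~ velocity_target with a squared loss. An untrained
# "model" that predicts zeros gives this loss:
t = 0.3
x_t = interpolate(x0, x1, t)
loss = np.mean((np.zeros_like(x_t) - velocity_target) ** 2)
print(x_t, loss)
```

Because the target velocity does not depend on t, the objective fits cleanly into a standard regression loop, which is part of why the design "simplifies training" as described above.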
Janus-Pro builds upon its predecessors by incorporating optimized training methods, expanded datasets, and larger model sizes. These enhancements significantly improve multimodal understanding, text-to-image instruction following, and the stability of text-to-image generation.
Source: deepseek-ai/Janus
For a deeper dive into the Janus series, access methods, and comparisons with OpenAI's DALL-E 3, see DeepSeek's Janus-Pro: Features, DALL-E 3 Comparison & More.
While Janus is a relatively new model, lacking readily available quantized versions or local applications for easy desktop/laptop use, its GitHub repository offers a Gradio web application demo. However, this demo frequently encounters package conflicts. This project addresses this by modifying the code, building a custom Docker image, and running it locally using Docker Desktop.
Begin by downloading and installing the latest Docker Desktop version from the official Docker website.
Windows users: you will also need the Windows Subsystem for Linux (WSL). Install it from a terminal with:
<code>wsl --install</code>
Clone the Janus repository and navigate to the project directory:
<code>git clone https://github.com/deepseek-ai/Janus.git
cd Janus</code>
In the demo folder, open app_januspro.py and make these two changes:
1. Model ID: replace deepseek-ai/Janus-Pro-7B with deepseek-ai/Janus-Pro-1B. This uses the smaller (4.1 GB) model, better suited for local use.
2. demo.queue function: modify the last line to:
<code>demo.queue(concurrency_count=1, max_size=10).launch(
    server_name="0.0.0.0",
    server_port=7860
)</code>
This ensures the app binds to an address and port that Docker's port mapping can reach.
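The model-ID swap can also be scripted with sed instead of editing the file by hand (assumes a Unix-like shell; on Windows, run it inside WSL). The snippet below demonstrates the substitution on a stand-in file so it runs anywhere; in the cloned repo you would point the same sed command at the real demo/app_januspro.py.

```shell
# Stand-in file mimicking the line to change (the real file comes from the repo)
mkdir -p demo
printf 'model_id = "deepseek-ai/Janus-Pro-7B"\n' > demo/app_januspro.py

# The actual substitution: swap the 7B model ID for the 1B one, in place
sed -i 's|deepseek-ai/Janus-Pro-7B|deepseek-ai/Janus-Pro-1B|g' demo/app_januspro.py

# Show the result
grep 'Janus-Pro' demo/app_januspro.py
```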
Create a Dockerfile in the project's root directory with this content:
<code># Use the PyTorch base image
FROM pytorch/pytorch:latest

# Set the working directory inside the container
WORKDIR /app

# Copy the current directory into the container
COPY . /app

# Install necessary Python packages
RUN pip install -e .[gradio]

# Set the entrypoint for the container to launch your Gradio app
CMD ["python", "demo/app_januspro.py"]</code>
This Dockerfile starts from the official PyTorch image, copies the project into /app, installs the Janus package with its Gradio extras, and launches the demo app when the container starts.
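Because the Dockerfile copies the entire directory with COPY . /app, you can optionally add a .dockerignore file next to it to keep the build context small. This is a standard Docker convention; the entries below are suggestions, not part of the original tutorial:

```
.git
__pycache__/
*.pyc
```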
After creating the Dockerfile, build and run the Docker image. Consider taking an Introduction to Docker course for foundational knowledge.
Build the image using:
<code>docker build -t janus .</code>
(This may take 10-15 minutes depending on your internet connection.)
Start the container with GPU support, port mapping, and persistent storage:
<code>docker run -it -p 7860:7860 -d \
  -v huggingface:/root/.cache/huggingface \
  -w /app --gpus all \
  --name janus janus:latest</code>
Monitor progress in the Docker Desktop application's "Containers" and "Logs" tabs, or from the terminal with <code>docker logs -f janus</code>. The model download from Hugging Face Hub will be visible in the logs.
Access the application at http://localhost:7860/. For troubleshooting, refer to the updated Janus project at kingabzpro/Janus: Janus-Series.
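Once the container is running, you can confirm the Gradio server is answering with a few lines of Python (standard library only; the URL assumes the port mapping used above):

```python
import urllib.error
import urllib.request

def is_up(url, timeout=5):
    """Return True if an HTTP server answers at `url`, False otherwise."""
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return resp.status < 500
    except (urllib.error.URLError, OSError):
        return False

# Prints True once the model has loaded and the app is serving
print(is_up("http://localhost:7860/"))
```

If this returns False after the container has been up for a while, check the container logs for a crash during model download or startup.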
The web app provides a user-friendly interface. This section demonstrates Janus Pro's multimodal understanding and text-to-image generation.
To test multimodal understanding, upload an image and request an explanation. Even with the smaller 1B model, the results are highly accurate.
Similarly, testing with an infographic demonstrates accurate summarization of textual content within the image.
The "Text-to-Image Generation" section allows for testing with custom prompts. The model generates five variations, which may take several minutes.
The generated images are comparable in quality and detail to Stable Diffusion XL. A more complex prompt is also tested below, demonstrating the model's ability to handle intricate descriptions.
Prompt Example: (Detailed description of an eye with ornate surroundings)
For comprehensive testing, DeepSeek's Hugging Face Spaces deployment (Chat With Janus-Pro-7B) provides access to the full model's capabilities. The Janus Pro model's accuracy, even with smaller variants, is noteworthy.
This tutorial detailed Janus Pro's multimodal capabilities and provided instructions for setting up a local, efficient solution for private use. Further learning is available via our guide on Fine-Tuning DeepSeek R1 (Reasoning Model).