


Let's explore a way to do OCR + LLM analysis for an image. Will this be the best way, as given by an expert with decades of experience? Not really. But it comes from someone who takes a similar approach in real life. Think of this as a weekend-project version with practical snippets rather than production-ready code. Let's dig in!
What's our goal here?
We're going to build a simple pipeline that can take an image (or PDF), extract text from it using OCR, and then analyze that text using an LLM to get useful metadata. This could be handy for automatically categorizing documents, analyzing incoming correspondence, or building a smart document management system. We'll do it using some popular open-source tools and keep things relatively straightforward.
And yeah, everything below assumes you're already pretty comfortable with HF transformers. If not, check out https://huggingface.co/docs/transformers/en/quicktour - seems like a solid place to start. Though I never did and just learned from examples. I'll get to it... eventually.
What packages do we need?
We'll use torch and transformers for the heavy lifting, plus pymupdf for PDF handling and rich for some user-friendly console output (I like rich, so basically we're using it for fun).
```python
import json
import time

import fitz
import torch
from rich.console import Console
from transformers import AutoModel, AutoTokenizer, pipeline

console = Console()
```
Prepare the image
First off, what image should we use as input? Since we're using Hugging Face here for the primary job, let's use the first page of their main page as our test subject. It's a good candidate, with both text and complicated formatting - perfect for putting our OCR through its paces.
For a more realistic solution, let's assume our input is a PDF (because let's face it, that's what you'll probably deal with in the real world). We'll need to convert it to PNG format for our model to process:
```python
INPUT_PDF_FILE = "./data/ocr_hf_main_page.pdf"
OUTPUT_PNG_FILE = "./data/ocr_hf_main_page.png"

doc = fitz.open(INPUT_PDF_FILE)
page = doc.load_page(0)
pixmap = page.get_pixmap(dpi=300)
img = pixmap.tobytes()

with console.status("Converting PDF to PNG...", spinner="monkey"):
    with open(OUTPUT_PNG_FILE, "wb") as f:
        f.write(img)
```
Do the real OCR here
I've played around with various OCR solutions for this task. Sure, there's tesseract and plenty of other options out there. But for my test case, I got the best results with GOT-OCR2_0 (https://huggingface.co/stepfun-ai/GOT-OCR2_0). So let's jump right in with that:
```python
tokenizer = AutoTokenizer.from_pretrained(
    "ucaslcl/GOT-OCR2_0",
    device_map="cuda",
    trust_remote_code=True,
)
model = AutoModel.from_pretrained(
    "ucaslcl/GOT-OCR2_0",
    trust_remote_code=True,
    low_cpu_mem_usage=True,
    use_safetensors=True,
    pad_token_id=tokenizer.eos_token_id,
)
model = model.eval().cuda()
```
What's going on here? Well, it's a default AutoModel and AutoTokenizer setup; the only special part is that we're setting the model up to use CUDA. And this isn't optional - the model requires CUDA support to run.
Now that we've defined our model, let's actually put it to work on our saved file. We'll also measure and print the elapsed time - useful not only for comparing different models, but also for judging whether the wait is even feasible for your use case (although it's very quick for ours):
```python
def run_ocr_for_file(func: callable, text: str):
    start_time = time.time()
    res = func()
    final_time = time.time() - start_time

    console.rule(f"[bold red] {text} [/bold red]")
    console.print(res)
    console.rule(f"Time: {final_time} seconds")

    return res


result_text = None
with console.status(
    "Running OCR for the result file...",
    spinner="monkey",
):
    result_text = run_ocr_for_file(
        lambda: model.chat(
            tokenizer,
            OUTPUT_PNG_FILE,
            ocr_type="ocr",
        ),
        "plain texts OCR",
    )
```
And here's what we get from our original image:
```text
Hugging Face- The Al community building the future. https: / / hugging face. co/ Search models, datasets, users. . . Following 0 All Models Datasets Spaces Papers Collections Community Posts Up votes Likes New Follow your favorite Al creators Refresh List black- forest- labs· Advancing state- of- the- art image generation Follow stability a i· Sharing open- source image generation models Follow bria a i· Specializing in advanced image editing models Follow Trending last 7 days All Models Datasets Spaces deep see k- a i/ Deep Seek- V 3 Updated 3 days ago· 40 k· 877 deep see k- a i/ Deep Seek- V 3- Base Updated 3 days ago· 6.34 k· 1.06 k 2.39 k TRELLIS Q wen/ QV Q- 72 B- Preview 88888888888888888888 888888888888888888 301 Gemini Co der 1 of 3 2025-01-01,9:38 p. m
```
^ all the text, no formatting, but it's intentional.
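Since we get one unformatted blob, I like to run a tiny cleanup pass before handing the text to the LLM. This is a hypothetical helper of my own (not part of the model's API) that just collapses the noisy whitespace:

```python
import re


def normalize_ocr_text(text: str) -> str:
    # collapse runs of spaces/tabs into a single space
    text = re.sub(r"[ \t]+", " ", text)
    # strip stray spaces hugging the newlines
    text = re.sub(r" ?\n ?", "\n", text)
    return text.strip()
```

Nothing clever, but it trims the token count a little before the analysis step.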
GOT-OCR2_0 is pretty flexible - it can output in different formats, including HTML. Here are some other ways you can use it:
```python
# format texts OCR:
result_text = model.chat(
    tokenizer, image_file, ocr_type='format',
)

# fine-grained OCR:
result_text = model.chat(
    tokenizer, image_file, ocr_type='ocr', ocr_box='',
)
# ... ocr_type='format', ocr_box='')
# ... ocr_type='ocr', ocr_color='')
# ... ocr_type='format', ocr_color='')

# multi-crop OCR:
# ... ocr_type='ocr')
# ... ocr_type='format')

# render the formatted OCR results:
result_text = model.chat(
    tokenizer, image_file, ocr_type='format', render=True,
    save_render_file='./demo.html',
)
```
Finally try LLM
Now comes the fun part - picking an LLM. There's been endless discussion about which one's best, with articles everywhere you look. But let's keep it simple: what's the LLM everyone and their dog has heard of? Llama. So we'll use Llama-3.2-1B to process our text.
What can we get from the text? Think basic stuff like text classification, sentiment analysis, language detection, etc. Imagine you're building a system to automatically categorize uploaded documents or sort incoming faxes for a pharmacy.
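Before prompting, it helps to pin down the shape of the metadata you expect back. Here's a small sketch - the field names are my own assumption, mirroring the classification/sentiment/language idea above - that validates the JSON the LLM returns:

```python
import json

# assumed schema for the metadata we ask the LLM to produce
EXPECTED_FIELDS = {"classification", "sentiment", "language"}


def parse_metadata(raw: str) -> dict:
    # parse the LLM's JSON reply and fail loudly if a field is missing
    data = json.loads(raw)
    missing = EXPECTED_FIELDS - data.keys()
    if missing:
        raise ValueError(f"LLM reply is missing fields: {sorted(missing)}")
    return data
```

Failing loudly here beats silently filing a document under a missing category.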
I'll skip the deep dive into prompt engineering (that's a whole other article, and I don't believe I'll be writing it), but here's the basic idea - treat the exact prompt wording and generation parameters below as illustrative, not gospel:

```python
analyzer = pipeline(
    "text-generation",
    model="meta-llama/Llama-3.2-1B",
    torch_dtype=torch.bfloat16,
    device_map="auto",
)

prompt = f"""Analyze the document text below and respond with JSON only,
using the keys "classification", "sentiment" and "language".

Document text:
{result_text}
"""

llm_reply = analyzer(prompt, max_new_tokens=256)[0]["generated_text"]
```
By the way, am I doing something hilariously stupid here with the prompt/content? Let me know. I'm pretty new to "prompt engineering" and don't take it seriously enough yet.
The model sometimes wraps the result in markdown code blocks, so we need to handle that (if anyone knows a cleaner way, I'm all ears):
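A minimal way to handle it - my own helper, assuming the reply may arrive wrapped in markdown fences: if the text starts with a fence, drop the opening and closing fence lines and keep the middle.

```python
def strip_markdown_fences(reply: str) -> str:
    # e.g. the model may return ```json\n{...}\n``` instead of bare JSON
    reply = reply.strip()
    if reply.startswith("```"):
        reply = reply.split("\n", 1)[1]    # drop the opening ``` line
        reply = reply.rsplit("```", 1)[0]  # drop the closing fence
    return reply.strip()
```

Replies without fences pass through untouched, so it's safe to call unconditionally.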
And what we typically get back is a small JSON object with exactly those metadata fields - classification, sentiment, detected language - ready to feed into whatever document-sorting logic you have downstream.
To sum up
We've built a little pipeline that can take a PDF, extract its text using some pretty good OCR, and then analyze that text using an LLM to get useful metadata. Is it production-ready? Probably not. But it's a solid starting point if you're looking to build something similar. The cool thing is how we combined different open-source tools to create something useful - from PDF handling to OCR to LLM analysis.
You can easily extend this. Maybe add better error handling, support for multiple pages, or try different LLMs. Or maybe hook it up to a document management system. Hope you will. It might be a fun task.
Remember, this is just one way to do it - there are probably dozens of other approaches that might work better for your specific use case. But hopefully, this gives you a good starting point for your own experiments! Or a perfect place to teach me in the comments how it's done.
The above is the detailed content of Quick and Dirty Document Analysis: Combining GOT-OCR and LLama in Python. For more information, please follow other related articles on the PHP Chinese website!
