


This article explains how to use Python to crawl tabular data from PDF files, with code examples. It has certain reference value, and I hope it will be helpful to you.
This article will show a slightly different crawler.
In the past, our crawlers fetched data from the Internet. Since web pages are generally written in HTML, CSS, and JavaScript, there are plenty of mature techniques for extracting data from them. This time, however, the documents we need to process are PDF files. This article will show how to use Python's camelot module to extract tabular data from PDF files.
In daily life and work, PDF is undoubtedly one of the most commonly used file formats: textbooks, courseware, contracts, planning documents, and much more come as PDFs. Extracting tables from a PDF, however, is a hard problem, because the PDF format has no internal representation of a table. This makes tabular data difficult to pull out for analysis. So how do we extract table data from a PDF?
The answer is Python’s camelot module!
camelot is a Python module that lets anyone easily extract tabular data from PDF files. You can install it with the following command (the installation may take a while):
pip install camelot-py
The official documentation address of the camelot module is: https://camelot-py.readthedoc....
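After installation, a quick sanity check is to import the module and print its version. This is a minimal sketch, assuming a standard install; depending on the camelot version, the lattice flavor may additionally require Ghostscript and OpenCV.
import camelot

# If the import succeeds, camelot is installed; the version number is useful
# when comparing behaviour against the official documentation.
print(camelot.__version__)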
The following will show how to use the camelot module to crawl tabular data from PDF files.
Example 1
First, let's look at a simple example, eg.pdf. The file has only one page, and that page contains a single table, as shown below:
Use the following Python code to extract the table in the PDF file:
import camelot

# Extract tables from the PDF file
tables = camelot.read_pdf('E://eg.pdf', pages='1', flavor='stream')

# Table information
print(tables)
print(tables[0])

# Table data
print(tables[0].data)
The output result is:
<TableList n=1>
<Table shape=(4, 4)>
[['ID', '姓名', '城市', '性别'], ['1', 'Alex', 'Shanghai', 'M'], ['2', 'Bob', 'Beijing', 'F'], ['3', 'Cook', 'New York', 'M']]
Analyzing the code: camelot.read_pdf() is camelot's function for extracting tables from a PDF file. Its arguments are the path of the PDF file, the page number(s) (pages), and the table parsing flavor (there are two: stream and lattice). The default flavor is lattice, while the stream flavor treats the entire PDF page as one table by default; if you need to restrict parsing to a specific area of the page, you can use the table_areas parameter.
What makes the camelot module convenient is that it provides methods for converting the extracted table data directly to pandas, CSV, JSON, and HTML, such as tables[0].df, tables[0].to_csv(), and so on. Let's take exporting a CSV file as an example:
import camelot

# Extract tables from the PDF file
tables = camelot.read_pdf('E://eg.pdf', pages='1', flavor='stream')

# Export the table data to a CSV file
tables[0].to_csv('E://eg.csv')
The obtained csv file is as follows:
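Besides CSV, the Table object can be exported in several other formats. The following is a minimal sketch based on the methods in camelot's documentation (the output paths are only illustrative):
import camelot

tables = camelot.read_pdf('E://eg.pdf', pages='1', flavor='stream')
table = tables[0]

# Summary of how well the table was parsed (accuracy, whitespace, page, etc.)
print(table.parsing_report)

df = table.df                  # pandas DataFrame
table.to_json('E://eg.json')   # JSON file
table.to_html('E://eg.html')   # HTML file
table.to_excel('E://eg.xlsx')  # Excel file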
Example 2
In Example 2, we will extract the table data located in a specific area of a PDF page. The relevant part of the PDF file looks like this:
To extract the only table on the page, we first need to locate where the table is. The coordinate system of a PDF file differs from that of an image: its origin is at the bottom-left corner of the page, with the x-axis pointing right and the y-axis pointing up. The coordinates of the text on the whole page can be plotted with the following Python code:
import camelot

# Extract tables from the PDF
tables = camelot.read_pdf('G://Statistics-Fundamentals-Succinctly.pdf', pages='53',
                          flavor='stream')

# Plot the text coordinates of the PDF page to locate the table
tables[0].plot('text')
The output result is:
UserWarning: No tables found on page-53 [stream.py:292]
The code did not find any table. This is because the stream flavor treats the entire PDF page as one table by default, so no separate table was detected. The plotted page coordinates, however, look like this:
Comparing this carefully with the PDF page shown earlier, we can easily see that the area containing the table has its top-left corner at roughly (50, 620) and its bottom-right corner at roughly (500, 540). We add the table_areas parameter to the read_pdf() function; the complete Python code is as follows:
import camelot

# Parse the table inside the specified area of the page
tables = camelot.read_pdf('G://Statistics-Fundamentals-Succinctly.pdf', pages='53',
                          flavor='stream', table_areas=['50,620,500,540'])

# Convert the extracted table to a pandas DataFrame
table_df = tables[0].df
print(type(table_df))
print(table_df.head(n=6))
The output result is:
<class 'pandas.core.frame.DataFrame'>
         0               1                2           3
0  Student  Pre-test score  Post-test score  Difference
1        1              70               73           3
2        2              64               65           1
3        3              69               63          -6
4        …               …                …           …
5       34              82               88           6
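Since camelot returns every cell as a string and keeps the header row as ordinary data, a common follow-up step is to promote the first row to column names and convert the numeric columns. Here is a minimal sketch (the column names are taken from the output above):
import camelot
import pandas as pd

tables = camelot.read_pdf('G://Statistics-Fundamentals-Succinctly.pdf', pages='53',
                          flavor='stream', table_areas=['50,620,500,540'])
df = tables[0].df

# Use the first row as the header and drop it from the data
df.columns = df.iloc[0]
df = df.drop(index=0).reset_index(drop=True)

# Convert the numeric columns from strings to numbers
for col in ['Student', 'Pre-test score', 'Post-test score', 'Difference']:
    df[col] = pd.to_numeric(df[col], errors='coerce')

print(df.dtypes)
print(df.head())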
Summary
When extracting tables from specific PDF pages, besides the parameter for specifying the table area, camelot also offers parameters for handling superscripts and subscripts, merged cells, and so on. For detailed usage, please refer to the camelot official documentation: https://camelot-py.readthedoc....
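As a rough illustration of those options, here is a sketch using parameter names from the camelot documentation: split_text splits text that spans multiple cells, flag_size marks text whose font size differs from the rest of the cell (e.g. superscripts and subscripts), and copy_text (lattice flavor only) copies text into cells produced by merged rows or columns. The file path below is just the earlier example file.
import camelot

# Stream flavor: split spanning text and flag superscripts/subscripts with <s></s>
tables = camelot.read_pdf('E://eg.pdf', pages='1', flavor='stream',
                          split_text=True, flag_size=True)

# Lattice flavor: copy text vertically ('v') into cells created by merged rows
tables = camelot.read_pdf('E://eg.pdf', pages='1', flavor='lattice',
                          copy_text=['v'])

print(tables[0].df)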
The above is the detailed content of How to crawl tabular data from PDF files in Python (code example). For more information, please follow other related articles on the PHP Chinese website!
