


This article shows how Python can crawl tabular data from PDF files, with code examples. It is intended as a practical reference; I hope you find it helpful.
This article will show a slightly different crawler.
Our previous crawlers fetched data from the Internet. Because web pages are generally written in HTML, CSS, and JavaScript, there is a large body of mature techniques for scraping all kinds of data from them. This time, the documents we need to crawl are PDF files, and this article shows how to use Python's camelot module to extract tabular data from them.
In daily life and work, PDF is undoubtedly one of the most commonly used file formats; we encounter it everywhere, from textbooks and courseware to contracts and planning documents. Extracting tables from a PDF, however, is a hard problem, because PDF has no internal representation of a table. This makes tabular data difficult to extract for analysis. So how do we crawl table data out of a PDF?
The answer is Python’s camelot module!
camelot is a Python module that lets anyone easily extract tabular data from PDF files. You can install it with the following command (the installation can take a while):
pip install camelot-py
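If you plan to use the lattice parsing method mentioned below, note that it has extra dependencies: depending on the camelot version, lattice needs OpenCV (installable via the pip extra camelot-py[cv]) and a system Ghostscript installation. A quick sanity check that the module imported correctly (a minimal sketch; the version string is printed just for reference):

import camelot

# If this runs without an ImportError, the installation succeeded
print(camelot.__version__)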
The official documentation address of the camelot module is: https://camelot-py.readthedoc....
The following will show how to use the camelot module to crawl tabular data from PDF files.
Example 1
First, let us look at a simple example: eg.pdf. The entire file has only one page, and that page contains only one table, as follows:
Use the following Python code to extract the table in the PDF file:
import camelot

# Extract tables from the PDF file
tables = camelot.read_pdf('E://eg.pdf', pages='1', flavor='stream')

# Table information
print(tables)
print(tables[0])

# Table data
print(tables[0].data)
The output result is:
<TableList n=1>
<Table shape=(4, 4)>
[['ID', '姓名', '城市', '性别'], ['1', 'Alex', 'Shanghai', 'M'], ['2', 'Bob', 'Beijing', 'F'], ['3', 'Cook', 'New York', 'M']]
Looking at the code, camelot.read_pdf() is camelot's function for extracting tables from a PDF. Its arguments are the path of the PDF file, the page numbers (pages), and the table parsing method (flavor), of which there are two: stream and lattice. The default flavor is lattice, while the stream flavor parses the entire PDF page as one table by default. If you need to restrict parsing to a specific region of the page, you can use the table_areas parameter.
A convenience of the camelot module is that it provides ways to convert the extracted table data directly to pandas, CSV, JSON, and HTML, such as the tables[0].df property and the tables[0].to_csv() method. Let us take CSV output as an example:
import camelot

# Extract tables from the PDF file
tables = camelot.read_pdf('E://eg.pdf', pages='1', flavor='stream')

# Convert the table data to a CSV file
tables[0].to_csv('E://eg.csv')
The obtained csv file is as follows:
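Besides to_csv(), the Table object exposes several other exporters. Below is a minimal sketch of the conversions mentioned above (to_json(), to_html(), and to_excel() are part of camelot's Table API; the output paths are just placeholders):

import camelot

tables = camelot.read_pdf('E://eg.pdf', pages='1', flavor='stream')
table = tables[0]

# df is a property (not a method) returning a pandas DataFrame
print(table.df.head())

# Other supported export formats
table.to_json('E://eg.json')    # JSON
table.to_html('E://eg.html')    # HTML
table.to_excel('E://eg.xlsx')   # Excel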
Example 2
In Example 2, we will extract the table from a specific region of a PDF page. The relevant part of the PDF page looks like this:
To extract the only table on the page, we first need to locate it. The coordinate system of a PDF file differs from that of an image: the origin is at the bottom-left corner, with the x-axis pointing right and the y-axis pointing up. The coordinates of the text on the whole page can be plotted with the following Python code:
import camelot

# Extract tables from the PDF
tables = camelot.read_pdf('G://Statistics-Fundamentals-Succinctly.pdf', pages='53',
                          flavor='stream')

# Plot the text coordinates of the PDF page to locate the table
tables[0].plot('text')
The output result is:
UserWarning: No tables found on page-53 [stream.py:292]
The code does not find any table. This is because the stream flavor treats the whole PDF page as one table by default, and in this case it fails to detect one. The plotted text coordinates of the page, however, look like this:
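As an aside, newer camelot releases moved plotting from the Table object to a module-level camelot.plot() function. A hedged sketch of the equivalent call, shown here on the file from Example 1 (assuming matplotlib is installed and at least one table was parsed):

import camelot
import matplotlib.pyplot as plt

tables = camelot.read_pdf('E://eg.pdf', pages='1', flavor='stream')

# kind='text' draws a box for every text object on the page, which is
# what we read the table coordinates from
camelot.plot(tables[0], kind='text')
plt.show()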
Comparing this carefully with the PDF page shown earlier, we can easily see that the top-left corner of the table region is at (50, 620) and the bottom-right corner is at (500, 540). We add the table_areas parameter to the read_pdf() function; the complete Python code is as follows:
import camelot

# Parse the table inside the specified region of the page
tables = camelot.read_pdf('G://Statistics-Fundamentals-Succinctly.pdf', pages='53',
                          flavor='stream', table_areas=['50,620,500,540'])

# Convert the table to a pandas DataFrame and inspect it
table_df = tables[0].df
print(type(table_df))
print(table_df.head(n=6))
The output result is:
<class 'pandas.core.frame.DataFrame'>
         0               1                2           3
0  Student  Pre-test score  Post-test score  Difference
1        1              70               73           3
2        2              64               65           1
3        3              69               63          -6
4        …               …                …           …
5       34              82               88           6
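Before relying on the extracted numbers, it can be worth checking camelot's own quality metrics. A short sketch using the parsing_report property, which reports accuracy and whitespace scores for each parsed table:

import camelot

tables = camelot.read_pdf('G://Statistics-Fundamentals-Succinctly.pdf', pages='53',
                          flavor='stream', table_areas=['50,620,500,540'])

# A dict such as {'accuracy': ..., 'whitespace': ..., 'order': 1, 'page': 53}
print(tables[0].parsing_report)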
Summary
When parsing tables on specific PDF pages, besides the region parameter, camelot also offers parameters for handling superscripts and subscripts, merged cells, and more; see the sketch below. For detailed usage, please refer to the official camelot documentation: https://camelot-py.readthedoc....
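As a pointer, two of those parameters look roughly like this (a hedged sketch; report.pdf is a placeholder file, flag_size marks superscripts and subscripts in the extracted text, and copy_text is a lattice-only option that copies text into merged spanning cells; check the documentation of your camelot version before use):

import camelot

# Stream flavor: wrap sub/superscript text in <s></s> markers
tables = camelot.read_pdf('report.pdf', pages='1', flavor='stream', flag_size=True)

# Lattice flavor: copy the text of merged cells along the vertical direction
tables = camelot.read_pdf('report.pdf', pages='1', flavor='lattice', copy_text=['v'])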