
# | Automate PDF data extraction: Build

Barbara Streisand
2024-12-15

Overview

I wrote a Python script that translates the PDF data extraction business logic into working code.

The script was tested on 71 pages of Custodian Statement PDFs covering a 10-month period (Jan to Oct 2024). Processing the PDFs took about 4 seconds, significantly quicker than doing it manually.


From what I see, the output looks correct and the code did not run into any errors.

Snapshots of the three CSV outputs are shown below. Note that sensitive data has been greyed out.

Snapshot 1: Stock Holdings


Snapshot 2: Fund Holdings


Snapshot 3: Cash Holdings


Broadly, the workflow to generate the CSV files was: read the PDFs, extract and filter the tables and non-table text, build tabular data, and write it out to CSV.


Now I will explain in more detail how I translated the business logic into Python code.

Step 1: Read PDF documents

I used pdfplumber's open() function.

# Open the PDF file
with pdfplumber.open(file_path) as pdf:

file_path is a declared variable that tells pdfplumber which file to open.
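For illustration only, here is a minimal sketch of how file_path might be declared before the with block; the path itself is hypothetical.

import pdfplumber

# Hypothetical path to one custodian statement PDF
file_path = "statements/custodian_statement_jan2024.pdf"

with pdfplumber.open(file_path) as pdf:
    print(f"Opened {file_path}: {len(pdf.pages)} pages")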

Step 2.0: Extract & filter tables from each page

The extract_tables() function does the hard work of extracting all tables from each page.
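As a rough sketch (assuming it runs inside the with pdfplumber.open(...) block from Step 1, with illustrative variable names), the per-page extraction might look like this:

# Inside: with pdfplumber.open(file_path) as pdf:
for page in pdf.pages:
    tables = page.extract_tables()  # list of tables; each table is a list of row lists
    for table in tables:
        print(table[0])  # the first row holds the column headers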

Though I am not really familiar with the underlying logic, I think the function did a pretty good job. For example, the two snapshots below show the extracted table vs. the original (from the PDF)

Snapshot A: Output from VS Code Terminal


Snapshot B: Table in PDF


I then needed to uniquely label each table, so that I could "pick and choose" data from specific tables later on.

The ideal option was to use each table's title. However, determining the title coordinates was beyond my capabilities.

As a workaround, I identified each table by concatenating the headers of its first three columns. For example, the Stock Holdings table in Snapshot B is labeled Stocks/ETFs\nNameExchangeQuantity (the \n comes from a line break inside the first header cell).

⚠️This approach has a serious drawback - the first three header names do not make all tables sufficiently unique. Fortunately, this only impacts irrelevant tables.
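A minimal sketch of this labeling workaround, assuming the first row of each extracted table holds its headers; the helper name is mine, not from the original script.

def table_label(table):
    # Join the first three header cells into one label,
    # e.g. "Stocks/ETFs\nNameExchangeQuantity" for the Stock Holdings table
    header_row = table[0]
    return "".join(cell or "" for cell in header_row[:3])

# Tables whose first three headers collide overwrite each other here,
# which is the drawback flagged above; it only affects irrelevant tables
labelled_tables = {table_label(t): t for t in tables}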

Step 2.1: Extract, filter & transform non-table text

The specific values I needed, Account Number and Statement Date, appear as substrings on Page 1 of each PDF.

For example, "Account Number M1234567" contains account number "M1234567".


I used Python's re library and got ChatGPT to suggest suitable regular expressions ("regex"). Each regex matches the full string and captures the desired value in a group, which I retrieve with group(1).

Regex for Statement Date and Account Number strings

regex_date = r'Statement for \b([A-Za-z]{3}-\d{4})\b'
regex_acc_no = r'Account Number ([A-Za-z]\d{7})'

I next transformed the Statement Date into "yyyymmdd" format. This makes it easier to query and sort data.

if match_date:
    # Convert the matched "mmm-yyyy" string to a date object
    date_obj = datetime.strptime(match_date.group(1), "%b-%Y")
    # Get the last day of that month
    last_day = calendar.monthrange(date_obj.year, date_obj.month)[1]
    # Replace the day with the last day of the month
    last_day_of_month = date_obj.replace(day=last_day)
    # Format as yyyymmdd
    statement_date = last_day_of_month.strftime("%Y%m%d")

match_date is the variable that holds the regex match object when a string matching the regex is found (and None otherwise).
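For context, here is a minimal sketch of how match_date could be produced, assuming Page 1's text is read with pdfplumber's extract_text(); the variable names are illustrative.

import re

# Read the raw text of Page 1, where the Account Number and Statement Date appear
page_text = pdf.pages[0].extract_text()

match_date = re.search(regex_date, page_text)
match_acc_no = re.search(regex_acc_no, page_text)

if match_acc_no:
    account_number = match_acc_no.group(1)  # e.g. "M1234567"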

Step 3: Create tabular data

The hard yards - extracting the relevant datapoints - were pretty much done at this point.

Next, I used pandas' DataFrame() function to create tabular data based on the output from Steps 2.0 and 2.1. I also used pandas to drop unnecessary columns and rows.

The end result can then be easily written to a CSV or stored in a database.
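As an illustration, a minimal sketch of this step, assuming each extracted table is a list of rows with the headers in the first row; the sample data and the columns being dropped are hypothetical.

import pandas as pd

# Hypothetical extracted table: first row is the header
stock_table = [
    ["Stocks/ETFs\nName", "Exchange", "Quantity"],
    ["Example Stock", "SGX", "100"],
]

# Build a dataframe from the extracted table (header row first)
header, *rows = stock_table
df_stocks = pd.DataFrame(rows, columns=header)

# Drop columns and rows that are not needed (labels are illustrative)
df_stocks = df_stocks.drop(columns=["Exchange"], errors="ignore")
df_stocks = df_stocks.dropna(how="all")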

Step 4: Write data to CSV file

I used pandas' to_csv() function to write each dataframe to a CSV file.


df_cash_selected is the Cash Holdings dataframe while file_cash_holdings is the file name of the Cash Holdings CSV.
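A minimal sketch of the write step; the index=False argument is my assumption, not taken from the original script.

# Write the Cash Holdings dataframe to its CSV file
df_cash_selected.to_csv(file_cash_holdings, index=False)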

➡️ I will write the data to a proper database once I have acquired some database know-how.

Next Steps

A working script is now in place to extract table and text data from the Custodian Statement PDF.

Before I proceed further, I will run some tests to see if the script is working as expected.

--Ends
