This article shares the method and code for using a Python crawler to convert "Liao Xuefeng's Python Tutorial" into a PDF. Friends in need can refer to it.
There is hardly an easier way to write a crawler than with Python. The Python community provides so many crawler tools that the choice can be dazzling, and with libraries that work out of the box you can have a crawler running in minutes. So today I am writing a crawler that pulls down Liao Xuefeng's Python tutorial and turns it into a PDF e-book that everyone can read offline.
Before we start writing the crawler, let's analyze the page structure of the website. The left side of each page is the directory outline of the tutorial, where each URL corresponds to an article on the right; the upper right is the article's title, and the middle is the article's body. The body text is what we care about: the data we want to crawl is the text part of every page. Below the text is the user comment area, which is of no use to us, so we can ignore it.
Tool preparation
After you have figured out the basic structure of the website, you can prepare the packages the crawler depends on. requests and beautifulsoup are the two workhorses of crawling: requests handles the network requests and beautifulsoup manipulates the HTML data. With these two tools we can work quickly; we do not need a crawler framework such as Scrapy, which in a small program like this would be killing a chicken with a sledgehammer. In addition, since we are converting HTML files to PDF, we also need library support: wkhtmltopdf is a very good tool that can convert HTML to PDF on multiple platforms, and pdfkit is its Python wrapper. First, install the following dependency packages (html5lib is required by the parser used in the code below), then install wkhtmltopdf:

pip install requests
pip install beautifulsoup4
pip install pdfkit
pip install html5lib
Install wkhtmltopdf
On the Windows platform, download the stable version from the wkhtmltopdf official website and install it. After installation, add the program's executable path to the system $PATH environment variable; otherwise pdfkit cannot find wkhtmltopdf and you will get the error "No wkhtmltopdf executable found". Ubuntu and CentOS can install it directly from the command line:
$ sudo apt-get install wkhtmltopdf   # ubuntu
$ sudo yum install wkhtmltopdf       # centos
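If pdfkit still reports "No wkhtmltopdf executable found" (common on Windows when the $PATH change has not yet taken effect), you can also point pdfkit at the binary explicitly. A minimal sketch; the install path below is an assumption and must be adjusted to your machine:

import pdfkit

# Hypothetical Windows install path -- change it to wherever
# wkhtmltopdf.exe actually lives on your machine.
config = pdfkit.configuration(wkhtmltopdf=r"C:\Program Files\wkhtmltopdf\bin\wkhtmltopdf.exe")

# Pass the configuration explicitly so pdfkit does not have to search $PATH.
pdfkit.from_file("a.html", "out.pdf", configuration=config)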
Crawler implementation
After everything is ready, we can start coding, but let's sort out our thoughts before writing any code. The purpose of the program is to save the HTML body corresponding to every URL locally, and then use pdfkit to convert these files into a single PDF file. Let's split the task: first, save the HTML body corresponding to one URL locally; then find all the URLs and perform the same operation on each.
Open one of the tutorial pages in Chrome and press F12 to inspect the body part of the page: the article text lives in the tag with the class x-wiki-content, which is the body content of the web page. After loading the entire page locally with requests, we can use beautifulsoup to operate on the HTML DOM elements and extract the text content.
The specific implementation is as follows: use the soup.find_all function to find the body tag, then save the content of the body part to the file a.html.
import requests
from bs4 import BeautifulSoup

def parse_url_to_html(url):
    """Save the body part of one page to the local file a.html."""
    response = requests.get(url)
    soup = BeautifulSoup(response.content, "html5lib")
    # the article text lives in the tag with class "x-wiki-content"
    body = soup.find_all(class_="x-wiki-content")[0]
    html = str(body)
    with open("a.html", 'wb') as f:
        # the file is opened in binary mode, so encode the string first
        f.write(html.encode("utf-8"))
The second step is to parse out all the URLs in the directory on the left side of the page. Using the same method, we find the left-hand menu tag (the second element with the class uk-nav uk-nav-side) and collect the link from each of its li items:
def get_url_list():
    """Get the list of URLs for all sections of the tutorial."""
    response = requests.get("http://www.liaoxuefeng.com/wiki/0014316089557264a6b348958f449949df42a6d3a2e542c000")
    soup = BeautifulSoup(response.content, "html5lib")
    # the left-hand menu is the second element with class "uk-nav uk-nav-side"
    menu_tag = soup.find_all(class_="uk-nav uk-nav-side")[1]
    urls = []
    for li in menu_tag.find_all("li"):
        url = "http://www.liaoxuefeng.com" + li.a.get('href')
        urls.append(url)
    return urls

The last step is to convert the HTML into a PDF file. Converting to PDF is very simple, because pdfkit has already encapsulated all the logic: you only need to call the function pdfkit.from_file.
import pdfkit

def save_pdf(htmls, file_name):
    """Convert all html files into a single pdf file."""
    options = {
        'page-size': 'Letter',
        'encoding': "UTF-8",
        'custom-header': [
            ('Accept-Encoding', 'gzip')
        ]
    }
    # pdfkit.from_file accepts a list of input files and merges them into one pdf
    pdfkit.from_file(htmls, file_name, options=options)

Call the save_pdf function and the e-book PDF file is generated. (The original article includes a screenshot of the rendered PDF here.)
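To tie the three steps together you need a small driver. The following is a minimal sketch, not the author's original glue code: the per-chapter file naming and the output file name are assumptions, and it presumes parse_url_to_html has been extended to accept an output file name instead of the hard-coded a.html.

def main():
    urls = get_url_list()
    htmls = []
    for index, url in enumerate(urls):
        html_file = "chapter_{}.html".format(index)
        # assumes parse_url_to_html(url, output_file) -- a small extension
        # of the function shown above, which hard-codes "a.html"
        parse_url_to_html(url, html_file)
        htmls.append(html_file)
    # merge all chapter files into a single pdf
    save_pdf(htmls, "liaoxuefeng-python-tutorial.pdf")

if __name__ == '__main__':
    main()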
Summary
The total code adds up to less than 50 lines. But wait: the code given above actually omits some details, such as how to get the title of each article. Also, the img tags in the body use relative paths, so to display the pictures correctly in the PDF the relative paths must be changed to absolute ones, and the temporary HTML files that get saved must be deleted afterwards. All of these details are covered in the code posted on GitHub.
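For example, the relative-to-absolute image path fix could look roughly like the sketch below. This is an illustration under the assumption that the site serves images from root-relative paths; it is not the author's exact code from GitHub.

from bs4 import BeautifulSoup

def fix_img_paths(html, base_url="http://www.liaoxuefeng.com"):
    """Rewrite root-relative img src attributes to absolute URLs so
    wkhtmltopdf can download the pictures while rendering the pdf."""
    soup = BeautifulSoup(html, "html5lib")
    for img in soup.find_all("img"):
        src = img.get("src", "")
        if src.startswith("/"):
            img["src"] = base_url + src
    return str(soup)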