A brief discussion on the encoding processing of Python crawling web pages

Background

During the Mid-Autumn Festival a friend emailed me saying that when he scraped Lianjia, the pages he got back were full of garbled characters, and he asked me for advice (working overtime during the holiday, how dedicated!). I actually ran into this problem long ago, when I was scraping novels, but I never took it seriously. At its root, the problem comes from a poor understanding of character encoding.

The Problem

A very common piece of crawler code looks like this:

# coding=utf-8
import re
import requests
import sys

# Python 2 hack: force the default string encoding (discussed below)
reload(sys)
sys.setdefaultencoding('utf8')

url = 'http://jb51.net/ershoufang/rs%E6%8B%9B%E5%95%86%E6%9E%9C%E5%B2%AD/'
res = requests.get(url)
print res.text

The goal is very simple: crawl the Lianjia listing pages. But when you run it, everything in the response that involves Chinese comes back garbled, like this:


<script type="text/template" id="newAddHouseTpl">
 <p class="newAddHouse">
  自从您上次浏览(<%=time%>ï¼‰ä¹‹åŽï¼Œè¯¥æœç´¢æ¡ä»¶ä¸‹æ–°å¢žåŠ äº†<%=count%>套房源
  <a href="<%=url%>" class="LOGNEWERSHOUFANGSHOW" <%=logText%>><%=linkText%></a>
  <span class="newHouseRightClose">x</span>
 </p>
</script>

Data like this is, for all practical purposes, useless.

Problem Analysis

The problem here is obvious: the text is being interpreted with the wrong encoding, which is what produces the garbled characters.

View the encoding of the web page

Looking at the <meta> tag in the head of the target page, the page declares itself as UTF-8:

<meta http-equiv="Content-Type" content="text/html; charset=utf-8">

So UTF-8 is what the final text handling must use; in other words, whatever bytes we end up with have to be decoded with utf-8, i.e. decode('utf-8').
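As a small, hedged illustration (the byte string below is just the UTF-8 encoding of 二手房, written out by hand rather than taken from the real page):

# coding=utf-8
# sketch: the page declares utf-8, so the final decode must use utf-8
raw = '\xe4\xba\x8c\xe6\x89\x8b\xe6\x88\xbf'  # utf-8 bytes for 二手房
print raw.decode('utf-8')                     # decoded with the right codec: readable text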

Text encoding and decoding

Python's handling of text goes like this: source bytes ===》decode (with the source's encoding) ===》Unicode ===》encode (with the target encoding). For the most part, it is not recommended to use

import sys
reload(sys)
sys.setdefaultencoding('utf8')

to force the text encoding like this. That said, if being lazy here is not actually causing you trouble, it is no big deal. Still, the recommended approach is to decode and encode the text explicitly once you have obtained the source data.
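A minimal sketch of that explicit flow, with hand-written byte strings standing in for real source data:

# coding=utf-8
# sketch (Python 2): decode the source bytes explicitly, then encode to whatever you need
raw_bytes = '\xe6\x8b\x9b\xe5\x95\x86'  # utf-8 bytes for 招商, as they might arrive from a source
text = raw_bytes.decode('utf-8')        # bytes -> unicode, using the source's real encoding
out_bytes = text.encode('gbk')          # unicode -> bytes, in whatever encoding the output needs
print text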

Back to the problem

The biggest question now is the encoding of the source data. When we use requests normally, it automatically guesses the source's encoding and transcodes the text to Unicode. But it is only a program, and it can guess wrong; when it does, we have to specify the encoding manually. The official documentation describes it as follows:

When you make a request, Requests makes educated guesses about the encoding of the response based on the HTTP headers. The text encoding guessed by Requests is used when you access r.text. You can find out what encoding Requests is using, and change it, using the r.encoding property.

So let's check which encoding requests has actually chosen:

# coding=utf-8
import re
import requests
from bs4 import BeautifulSoup
import sys

reload(sys)
sys.setdefaultencoding('utf8')

url = 'http://jb51.net/ershoufang/rs%E6%8B%9B%E5%95%86%E6%9E%9C%E5%B2%AD/'

res = requests.get(url)
# which encoding did requests guess from the response headers?
print res.encoding

The printed results are as follows:

ISO-8859-1

In other words, requests thinks the source is encoded in ISO-8859-1. A quick Baidu search for ISO-8859-1 turns up the following:

ISO-8859-1, usually called Latin-1, extends ASCII with the additional characters indispensable for writing Western European languages.
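This also explains why the guess never raises an error: Latin-1 assigns a character to every possible byte, so the decode always "succeeds"; it just splits each multi-byte UTF-8 sequence into several wrong characters. A quick sketch, using the 招商果岭 from the URL above:

# coding=utf-8
# sketch: decoding utf-8 bytes as Latin-1 never fails, it just produces mojibake
utf8_bytes = u'招商果岭'.encode('utf-8')
print repr(utf8_bytes.decode('ISO-8859-1'))  # one Latin-1 character per byte, shown escaped by repr
print utf8_bytes.decode('utf-8')             # the intended text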

Problem Solving

Once we know this, the problem is easy to solve: just tell requests the correct encoding and the Chinese text prints properly. The code is as follows:

# coding=utf-8
import requests
import sys

reload(sys)
sys.setdefaultencoding('utf8')

url = 'http://jb51.net/ershoufang/rs%E6%8B%9B%E5%95%86%E6%9E%9C%E5%B2%AD/'

res = requests.get(url)
# override the header-based guess with the encoding the page actually uses
res.encoding = 'utf8'

print res.text

The printed result speaks for itself: the Chinese characters are now displayed correctly.
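If you would rather not hard-code the charset, requests also exposes a content-based guess, res.apparent_encoding (backed by a character-detection library), which you can assign back before reading res.text. A hedged sketch, reusing the same URL:

# sketch: let the body-based detection pick the charset instead of hard-coding it
import requests

url = 'http://jb51.net/ershoufang/rs%E6%8B%9B%E5%95%86%E6%9E%9C%E5%B2%AD/'

res = requests.get(url)
res.encoding = res.apparent_encoding
print res.text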


Another approach is to re-encode and then decode the text ourselves. The code is as follows:

# coding=utf-8
import requests
import sys

reload(sys)
sys.setdefaultencoding('utf8')

url = 'http://jb51.net/ershoufang/rs%E6%8B%9B%E5%95%86%E6%9E%9C%E5%B2%AD/'

res = requests.get(url)
# res.encoding = 'utf8'

# res.text was decoded as ISO-8859-1; encoding it back recovers the raw bytes,
# which can then be decoded with the page's real encoding
print res.text.encode('ISO-8859-1').decode('utf-8')

Note: ISO-8859-1 is also called latin1, so using latin1 as the codec name gives the same (correct) result.
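That trick works because Latin-1 maps bytes to characters one-to-one and losslessly, so encode('ISO-8859-1') recovers exactly the bytes the server sent, and those bytes can then be decoded with the page's real encoding. A more direct route, sketched here as an alternative rather than as the method above, is to skip res.text entirely and decode the raw bytes yourself:

# sketch: decode the raw response bytes directly instead of round-tripping res.text
import requests

url = 'http://jb51.net/ershoufang/rs%E6%8B%9B%E5%95%86%E6%9E%9C%E5%B2%AD/'

res = requests.get(url)
print res.content.decode('utf-8')  # res.content is the undecoded byte string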

There is much more that could be said about character encoding; readers who want to dig deeper can refer to the following:

• "The Absolute Minimum Every Software Developer Absolutely, Positively Must Know About Unicode and Character Sets (No Excuses!)"

This has been a brief discussion of how to handle encoding when crawling web pages with Python. I hope it serves as a useful reference.
