A brief discussion on the encoding processing of Python crawling web pages

Background

During the Mid-Autumn Festival holiday, a friend emailed me: while crawling Lianjia he found that the pages his crawler returned were all garbled, and he asked me for advice (working overtime during the Mid-Autumn Festival, so dedicated!). I had actually run into this problem long ago, back when I was scraping novel sites, but I never took it seriously. The root cause is simply a poor understanding of character encodings.

Question

A very typical crawler script looks like this:

# coding=utf-8
import re
import requests
import sys
reload(sys)
sys.setdefaultencoding('utf8')

url = 'http://jb51.net/ershoufang/rs%E6%8B%9B%E5%95%86%E6%9E%9C%E5%B2%AD/'
res = requests.get(url)
print res.text

The intent is simple: fetch the content of the Lianjia listing page. But after running this, everything in the returned result that involves Chinese comes out garbled, for example:


<script type="text/template" id="newAddHouseTpl">
 <p class="newAddHouse">
  自从您上次浏览(<%=time%>ï¼‰ä¹‹åŽï¼Œè¯¥æœç´¢æ¡ä»¶ä¸‹æ–°å¢žåŠ äº†<%=count%>套房源
  <a href="<%=url%>" class="LOGNEWERSHOUFANGSHOW" <%=logText%>><%=linkText%></a>
  <span class="newHouseRightClose">x</span>
 </p>
</script>

Data like this is effectively useless.

Problem Analysis

The problem here is obvious, that is, the encoding of the text is incorrect, resulting in garbled characters.

View the encoding of the web page

The header of the target page shows that it is encoded in UTF-8:

<meta http-equiv="Content-Type" content="text/html; charset=utf-8">

So the final text handling must also use UTF-8; in other words, the raw bytes must ultimately be decoded with UTF-8, i.e. decode('utf-8').

Text encoding and decoding

The Python text pipeline works like this: raw bytes from the source ===》 decode (using the source's encoding) into Unicode ===》 encode (using the target encoding) back into bytes. In most cases it is not recommended to use

import sys
reload(sys)
sys.setdefaultencoding('utf8')

to hard-wire the text encoding globally. This kind of laziness does no great harm when it happens not to bite you, but the recommended practice is to decode and encode the text explicitly after obtaining the source data.
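The pipeline above can be sketched directly. Note this sketch is written in Python 3 (where there is no reload(sys) trick and str is already Unicode), while the article's own snippets are Python 2:

```python
# The decode/encode pipeline, shown in Python 3.
raw = "二手房".encode("utf-8")   # bytes as they arrive over the wire

text = raw.decode("utf-8")       # decode: bytes -> str, using the source's encoding
out = text.encode("utf-8")       # encode: str -> bytes, e.g. before writing to a file

assert text == "二手房"
assert out == raw
```

Decoding with the wrong codec at the first step is exactly what produces the garbled output seen above.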

Back to the question

The key question now is the encoding of the source data. When we use requests normally, it makes an educated guess at the response's encoding and then decodes the content to Unicode. But it is only a program, and it can guess wrong; when it does, we have to specify the encoding manually. The official documentation puts it like this:

When you make a request, Requests makes educated guesses about the encoding of the response based on the HTTP headers. The text encoding guessed by Requests is used when you access r.text. You can find out what encoding Requests is using, and change it, using the r.encoding property.

So let's check which encoding requests has actually picked:

# coding=utf-8
import requests
import sys
reload(sys)
sys.setdefaultencoding('utf8')

url = 'http://jb51.net/ershoufang/rs%E6%8B%9B%E5%95%86%E6%9E%9C%E5%B2%AD/'

res = requests.get(url)
print res.encoding

The printed results are as follows:

ISO-8859-1

In other words, requests is treating the response as ISO-8859-1. A quick search for ISO-8859-1 turns up:

ISO8859-1, usually called Latin-1. Latin-1 includes additional characters indispensable for writing all Western European languages.
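This is also why the decoding "succeeds" yet produces mojibake: Latin-1 assigns a character to every single byte value from 0 to 255, so decoding any byte stream with it never raises an error; it just yields one (often wrong) character per byte. A small Python 3 sketch:

```python
utf8_bytes = "自从".encode("utf-8")      # two Chinese characters = 6 bytes of UTF-8

# Latin-1 happily decodes anything: one character per byte, no errors raised.
garbled = utf8_bytes.decode("iso-8859-1")

assert len(garbled) == len(utf8_bytes)   # 6 mojibake characters, like the sample above
assert garbled != "自从"
```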

Problem Solving

Once this is known, the problem is easy to solve: specify the correct encoding and the Chinese prints properly. The code:

# coding=utf-8
import requests
import sys
reload(sys)
sys.setdefaultencoding('utf8')

url = 'http://jb51.net/ershoufang/rs%E6%8B%9B%E5%95%86%E6%9E%9C%E5%B2%AD/'

res = requests.get(url)
res.encoding = 'utf-8'

print res.text

The result speaks for itself: the Chinese characters are displayed correctly.
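The same fix can be demonstrated without touching the network, by hand-building a Response object to stand in for a server that sends UTF-8 bytes without declaring a charset (a hypothetical setup, purely for illustration). Note that requests also exposes res.apparent_encoding, which sniffs the encoding from the body itself and is often assigned to res.encoding instead of a hard-coded value:

```python
import requests

# Simulate a response: UTF-8 body, no charset declared, so requests
# falls back to ISO-8859-1 (the old HTTP/1.1 default).
res = requests.models.Response()
res._content = "招商果岭".encode("utf-8")
res.encoding = "ISO-8859-1"

garbled = res.text            # decoded with the wrong codec -> mojibake

res.encoding = "utf-8"        # manually specify the correct encoding
assert res.text == "招商果岭"
assert garbled != res.text
# res.encoding = res.apparent_encoding is a common alternative when the
# right charset is not known in advance.
```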


Another approach is to re-encode and then decode the text yourself. The code is as follows:

# coding=utf-8
import requests
import sys
reload(sys)
sys.setdefaultencoding('utf8')

url = 'http://jb51.net/ershoufang/rs%E6%8B%9B%E5%95%86%E6%9E%9C%E5%B2%AD/'

res = requests.get(url)

# Recover the original bytes, then decode them as UTF-8
print res.text.encode('ISO-8859-1').decode('utf-8')

Note: ISO-8859-1 is also called Latin-1. Because requests decoded the raw bytes with ISO-8859-1, encoding the text back with ISO-8859-1 recovers those bytes exactly, and they can then be decoded as UTF-8. That is why this round trip works.
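The repair round trip is easy to verify in Python 3: since Latin-1 maps the 256 byte values one-to-one, encoding the garbled text back to ISO-8859-1 loses nothing:

```python
original = "自从您上次浏览"

# What happened inside requests: UTF-8 bytes decoded as ISO-8859-1.
garbled = original.encode("utf-8").decode("iso-8859-1")

# The repair: recover the original bytes, then decode correctly.
repaired = garbled.encode("iso-8859-1").decode("utf-8")

assert garbled != original
assert repaired == original
```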

There is much more that could be said about character encodings; readers who want to dig deeper can start with the following:

•《The Absolute Minimum Every Software Developer Absolutely, Positively Must Know About Unicode and Character Sets (No Excuses!)》

The above is a brief discussion of encoding handling when crawling web pages with Python. I hope it gives you a useful reference, and I hope you will continue to support the PHP Chinese website.
