
Conscience recommendation! 8 essential skills for Python crawler masters!

If you want to learn web crawling quickly, Python is the language most worth learning. Python covers many application scenarios: rapid web development, crawlers, automated operations and maintenance, and more. You can use it to build a simple website, an auto-posting script, an email send-and-receive script, or a simple CAPTCHA-recognition script.

Crawler development also involves many reusable steps. Here is a summary of 8 essential techniques that will save time and effort later and help you finish tasks efficiently. (The snippets below use Python 2's urllib2; in Python 3 the same functionality lives in urllib.request.)

1. Basic crawling of web pages

GET method

import urllib2
url = "http://www.baidu.com"
response = urllib2.urlopen(url)
print response.read()

POST method

import urllib
import urllib2
url = "http://abcde.com"
form = {'name':'abc','password':'1234'}
form_data = urllib.urlencode(form)
request = urllib2.Request(url,form_data)
response = urllib2.urlopen(request)
print response.read()
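Since urllib2 exists only in Python 2, here is a hedged sketch of the same POST request in Python 3: urllib2 was merged into urllib.request, and urlencode moved to urllib.parse. Because abcde.com is the article's placeholder URL, the request is only constructed, not sent.

```python
import urllib.parse
import urllib.request

url = "http://abcde.com"  # the article's placeholder URL
form = {'name': 'abc', 'password': '1234'}
# urlencode now lives in urllib.parse, and the request body must be bytes
form_data = urllib.parse.urlencode(form).encode('ascii')

# supplying a data argument makes this a POST request
request = urllib.request.Request(url, form_data)
print(request.get_method())  # POST
# sending it would be: response = urllib.request.urlopen(request)
```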

2. Use a proxy IP

During crawler development you will often find your IP blocked; then you need a proxy IP. The urllib2 package provides the ProxyHandler class, which lets you set up a proxy for accessing web pages, as shown in the following code snippet:

import urllib2
proxy = urllib2.ProxyHandler({'http': '127.0.0.1:8087'})
opener = urllib2.build_opener(proxy)
urllib2.install_opener(opener)
response = urllib2.urlopen('http://www.baidu.com')
print response.read()
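For reference, a minimal Python 3 sketch of the same proxy setup (ProxyHandler moved to urllib.request; 127.0.0.1:8087 is the article's example address, and no request is actually sent here):

```python
import urllib.request

proxy = urllib.request.ProxyHandler({'http': 'http://127.0.0.1:8087'})
opener = urllib.request.build_opener(proxy)
urllib.request.install_opener(opener)
# every subsequent urllib.request.urlopen(...) call now goes through the proxy;
# the opener's handler chain confirms the proxy handler is registered
print(any(isinstance(h, urllib.request.ProxyHandler) for h in opener.handlers))
```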

3. Cookies processing

Cookies are data (often encrypted) that some websites store on the user's machine in order to identify the user and track the session. Python provides the cookielib module for handling cookies. Its main job is to provide objects that can store cookies, so that, used together with the urllib2 module, you can access Internet resources while keeping session state.

Code snippet:

import urllib2, cookielib
cookie_support= urllib2.HTTPCookieProcessor(cookielib.CookieJar())
opener = urllib2.build_opener(cookie_support)
urllib2.install_opener(opener)
content = urllib2.urlopen('http://XXXX').read()

The key is CookieJar(), which manages HTTP cookie values: it stores the cookies generated by HTTP requests and adds cookie objects to outgoing HTTP requests. The cookies are kept entirely in memory and are lost once the CookieJar instance is garbage-collected; none of this needs to be handled manually.
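In Python 3, cookielib was renamed http.cookiejar, but the pattern is the same. A minimal sketch (no request is sent, so the jar stays empty):

```python
import http.cookiejar
import urllib.request

cookie_jar = http.cookiejar.CookieJar()
cookie_support = urllib.request.HTTPCookieProcessor(cookie_jar)
opener = urllib.request.build_opener(cookie_support)
urllib.request.install_opener(opener)
# urlopen calls made through this opener will store response cookies in
# cookie_jar and attach them to later requests automatically
print(len(cookie_jar))  # 0, since nothing has been fetched yet
```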

Add cookies manually:

import urllib2
request = urllib2.Request('http://XXXX')
cookie = "PHPSESSID=91rurfqm2329bopnosfu4fvmu7; kmsign=55d2c12c9b1e3; KMUID=b6Ejc1XSwPq9o756AxnBAg="
request.add_header("Cookie", cookie)

4. Disguise as a browser

Some websites dislike visits from crawlers and reject all such requests, so accessing them directly with urllib2 often fails with HTTP Error 403: Forbidden.

Pay special attention to certain headers, because the server checks them:

  • User-Agent: some servers and proxies check this value to decide whether the request was initiated by a browser
  • Content-Type: when calling a REST interface, the server checks this value to decide how the content in the HTTP body should be parsed

This can be achieved by setting the headers on the request. The code snippet is as follows:

import urllib2
headers = {
 'User-Agent':'Mozilla/5.0 (Windows; U; Windows NT 6.1; en-US; rv:1.9.1.6) Gecko/20091201 Firefox/3.5.6'
}
request = urllib2.Request(
 url = 'http://my.oschina.net/jhao104/blog?catalog=3463517',
 headers = headers
)
print urllib2.urlopen(request).read()
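The same disguise sketched in Python 3: pass the headers dict to urllib.request.Request (example.com stands in for a real target, and the request is only built, not sent; urllib normalizes stored header names, so the key reads back as User-agent):

```python
import urllib.request

headers = {
    'User-Agent': 'Mozilla/5.0 (Windows; U; Windows NT 6.1; en-US; rv:1.9.1.6) Gecko/20091201 Firefox/3.5.6'
}
request = urllib.request.Request('http://example.com', headers=headers)
# urllib capitalizes stored header names, hence 'User-agent' here
print(request.get_header('User-agent'))
```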

5. Page parsing

For page parsing, the most powerful tool is of course the regular expression. Regexes differ for every site and every user, so there is no need to explain them at length here.

Next come the parsing libraries; the two commonly used ones are lxml and BeautifulSoup.

My assessment of these two: both are HTML/XML processing libraries. BeautifulSoup is implemented in pure Python, so it is less efficient, but its features are practical; for example, you can obtain the source of an HTML node through a search. lxml is implemented in C, so it is efficient, and it supports XPath.
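To show the two approaches side by side without third-party dependencies, here is a sketch using a regex and the standard library's html.parser (a stand-in for lxml/BeautifulSoup, which follow the same idea with richer APIs; the HTML string is made up):

```python
import re
from html.parser import HTMLParser

html = '<html><body><a href="/page1">First</a> <a href="/page2">Second</a></body></html>'

# Regex approach: quick to write, but fragile on messy real-world HTML
hrefs_re = re.findall(r'href="([^"]+)"', html)

# Parser approach: walks the document structurally, tag by tag
class LinkParser(HTMLParser):
    def __init__(self):
        super().__init__()
        self.hrefs = []

    def handle_starttag(self, tag, attrs):
        if tag == 'a':
            self.hrefs.extend(v for k, v in attrs if k == 'href')

parser = LinkParser()
parser.feed(html)
print(hrefs_re)      # ['/page1', '/page2']
print(parser.hrefs)  # ['/page1', '/page2']
```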

6. Processing of verification codes

Some simple CAPTCHAs can be recognized programmatically, and I have only ever done simple recognition myself. Some CAPTCHAs, however, are hostile even to humans, such as 12306's; those can be handed to a manual captcha-solving platform, which of course charges a fee.

7. Gzip compression

Have you ever encountered web pages that stay garbled no matter how you transcode them? Haha, that means you don't know that many web servers can send compressed data, which can cut the amount of data transmitted over the network by more than 60%. This is especially true for XML web services, since XML data compresses to a very high degree.

But generally the server will not send compressed data for you unless you tell the server that you can handle compressed data.

So you need to modify the code like this:

import urllib2
request = urllib2.Request('http://xxxx.com')
request.add_header('Accept-encoding', 'gzip')
opener = urllib2.build_opener()
f = opener.open(request)

This is the key: creating a Request object and adding an Accept-encoding header tells the server that you can accept gzip-compressed data.

Then it’s time to decompress the data:

import StringIO
import gzip
compresseddata = f.read()
compressedstream = StringIO.StringIO(compresseddata)
gzipper = gzip.GzipFile(fileobj=compressedstream)
print gzipper.read()
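In Python 3 the StringIO.StringIO step becomes io.BytesIO, because HTTP bodies are bytes. A self-contained sketch that simulates the server's compressed response locally instead of fetching one:

```python
import gzip
import io

body = b"<data>hello</data>"      # what the server would compress
compressed = gzip.compress(body)  # what would arrive over the wire

compressedstream = io.BytesIO(compressed)
gzipper = gzip.GzipFile(fileobj=compressedstream)
data = gzipper.read()
print(data)  # b'<data>hello</data>'
```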

8. Multi-threaded concurrent crawling

If a single thread is too slow, you need multithreading. Below is a simple thread-pool template; it merely prints the numbers 0-9, but you can see that they are printed concurrently.

Although Python's multithreading has a poor reputation (the GIL prevents true parallelism), it can still improve efficiency to some extent for network-bound crawlers.

from threading import Thread
from Queue import Queue
from time import sleep

# q is the task queue
# NUM is the total number of concurrent threads
# JOBS is the number of tasks
q = Queue()
NUM = 2
JOBS = 10

# The handler function, responsible for processing a single task
def do_something_using(arguments):
    print arguments

# The worker thread: keeps pulling tasks from the queue and processing them
def working():
    while True:
        arguments = q.get()
        do_something_using(arguments)
        sleep(1)
        q.task_done()

# Spawn NUM worker threads waiting on the queue
for i in range(NUM):
    t = Thread(target=working)
    t.setDaemon(True)
    t.start()

# Enqueue the JOBS tasks
for i in range(JOBS):
    q.put(i)

# Wait for all JOBS to finish
q.join()
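A Python 3 version of the same template, sketched so the outcome is checkable: Queue moved to the queue module, and the workers collect squared results instead of printing (the sleep is dropped to keep it fast; squaring is an arbitrary stand-in for real per-task work):

```python
from queue import Queue
from threading import Thread

NUM = 2    # number of worker threads
JOBS = 10  # number of tasks
q = Queue()
results = []

def worker():
    while True:
        item = q.get()
        results.append(item * item)  # stand-in for real work
        q.task_done()

for _ in range(NUM):
    Thread(target=worker, daemon=True).start()

for i in range(JOBS):
    q.put(i)

q.join()  # blocks until every queued task is done
print(sorted(results))  # [0, 1, 4, 9, 16, 25, 36, 49, 64, 81]
```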

The above is the full content of "Conscience recommendation! 8 essential skills for Python crawler masters!", published on the PHP Chinese website.

This article is reproduced from 51CTO.COM.