
In-depth understanding of Python distributed crawler principles

First, let's look at how a person normally obtains web content:

(1) Open the browser, enter the URL, and open the source web page

(2) Select the content we want, including the title, author, abstract, body text, and other information

(3) Save it to disk

Mapped to the technical level, these three steps are: network request, structured data extraction, and data storage.

Let's use Python to write a simple program that implements this basic crawling process.

#!/usr/bin/python
#-*- coding: utf-8 -*-
'''
Created on 2014-03-16

@author: Kris
'''
import urllib2, re, cookielib

def httpCrawler(url):
  '''
  @summary: crawl a web page
  '''
  content = httpRequest(url)
  title = parseHtml(content)
  saveData(title)

def httpRequest(url):
  '''
  @summary: network request
  '''
  try:
    ret = None
    SockFile = None
    request = urllib2.Request(url)
    request.add_header('User-Agent', 'Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.2; SV1; .NET CLR 1.1.4322)')
    request.add_header('Pragma', 'no-cache')
    opener = urllib2.build_opener()
    SockFile = opener.open(request)
    ret = SockFile.read()
  finally:
    if SockFile:
      SockFile.close()

  return ret

def parseHtml(html):
  '''
  @summary: extract structured data
  '''
  content = None
  pattern = '<title>([^<]*?)</title>'
  temp = re.findall(pattern, html)
  if temp:
    content = temp[0]

  return content

def saveData(data):
  '''
  @summary: store the data
  '''
  f = open('test', 'wb')
  f.write(data)
  f.close()

if __name__ == '__main__':
  url = 'http://www.baidu.com'
  httpCrawler(url)

It looks very simple, and indeed it is a basic starter crawler. Any collection process boils down to the steps above. But to build a powerful collection system, you will run into the following problems:

(1) Access that requires cookie information. For example, most social sites require users to log in before any valuable content can be seen. This is actually easy to handle: with Python's cookielib module, every request can carry the cookie information issued by the source site, so once we successfully simulate a login, the crawler stays in a logged-in state and can collect everything a logged-in user sees. Below is the httpRequest() method modified to use cookies:

ckjar = cookielib.MozillaCookieJar()
cookies = urllib2.HTTPCookieProcessor(ckjar)     # cookie handler bound to the cookie jar

def httpRequest(url):
  '''
  @summary: network request
  '''
  try:
    ret = None
    SockFile = None
    request = urllib2.Request(url)
    request.add_header('User-Agent', 'Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.2; SV1; .NET CLR 1.1.4322)')
    request.add_header('Pragma', 'no-cache')
    opener = urllib2.build_opener(cookies)    # pass the cookie handler to the opener
    SockFile = opener.open(request)
    ret = SockFile.read()
  finally:
    if SockFile:
      SockFile.close()

  return ret

(2) Encoding issues. The two most common encodings on websites today are utf-8 and gbk. When the source site's encoding and the encoding we store in our database are inconsistent (for example, 163.com uses gbk while we need to store utf-8 data), we can use the decode() and encode() methods provided by Python to convert, for example:

content = content.decode('gbk', 'ignore')    # decode gbk bytes to unicode
content = content.encode('utf-8', 'ignore')  # encode unicode to utf-8

Unicode sits in the middle: we must first convert to the intermediate unicode representation before converting between gbk and utf-8.
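
If the source encoding is not known in advance, one option (not mentioned in the original article, so treat it as an assumption) is to guess it with the chardet library before decoding. A minimal sketch, assuming `pip install chardet`:

import chardet

def toUtf8(raw_bytes):
  guess = chardet.detect(raw_bytes)                        # e.g. {'encoding': 'GBK', 'confidence': 0.99, ...}
  text = raw_bytes.decode(guess['encoding'] or 'utf-8', 'ignore')   # bytes -> unicode
  return text.encode('utf-8', 'ignore')                    # unicode -> utf-8 bytes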

(3) Incomplete tags in the web page. For example, some source pages have a start tag but no matching end tag. Incomplete HTML affects our ability to extract structured data, so we can use Python's BeautifulSoup module to clean up the source code first and then parse out the content, as in the sketch below.
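
A minimal sketch of this idea, assuming the beautifulsoup4 package is installed (`pip install beautifulsoup4`); the function name is just for illustration:

from bs4 import BeautifulSoup

def parseHtmlRobust(html):
  soup = BeautifulSoup(html, 'html.parser')     # the parser repairs missing end tags
  title_tag = soup.find('title')
  return title_tag.get_text().strip() if title_tag else None

# Example: the <p> tag is never closed, but parsing still succeeds
print(parseHtmlRobust('<html><head><title>Example</title></head><body><p>unclosed'))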

(4) Some websites render their content with JavaScript. Looking directly at the source code, we only find a pile of messy JS. Browser-engine toolkits (mozilla, webkit and the like) can be used to render the JS and AJAX and then parse the result, although this is somewhat slower. One possible approach is sketched below.
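
As one concrete option (Selenium is not named in the article, so this is an assumption), a real browser can be driven so the page is fetched after its JavaScript has run. A minimal sketch, assuming `pip install selenium` and a matching browser driver on PATH:

from selenium import webdriver

def fetchRendered(url):
  driver = webdriver.Firefox()        # any supported browser/driver works
  try:
    driver.get(url)                   # loads the page and executes its JS
    return driver.page_source         # HTML after rendering
  finally:
    driver.quit()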

(5) Content embedded in images or Flash. When the content of an image is text or numbers, it is relatively easy to handle: we can use OCR technology to recognize it automatically. If it is a Flash link, we simply store the whole URL. A hedged OCR sketch follows.
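
A minimal OCR sketch using pytesseract (not mentioned in the article, so treat the library choice as an assumption); it requires `pip install pillow pytesseract` and the Tesseract binary installed:

from PIL import Image
import pytesseract

def ocrImage(path):
  return pytesseract.image_to_string(Image.open(path))   # recognized text from the image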

(6) One site has multiple page structures. A single set of crawling rules will certainly not cover them, so we need to configure multiple sets of rules and pick the right one for each page, for example as sketched below.
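
A hedged sketch of keeping several extraction rules and choosing one by URL pattern; the patterns and rule entries below are made up purely for illustration:

import re

RULES = [
  (r'news\.example\.com/article', r'<h1>([^<]*)</h1>'),       # hypothetical rule for article pages
  (r'blog\.example\.com/post',    r'<title>([^<]*)</title>'),  # hypothetical rule for blog posts
]

def parseWithRules(url, html):
  for url_pattern, title_pattern in RULES:
    if re.search(url_pattern, url):
      m = re.search(title_pattern, html)
      return m.group(1) if m else None
  return None   # no rule configured for this page structure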

(7) The source website monitors for crawlers. After all, crawling other people's content is not something site owners welcome, so most websites impose restrictions that block crawler access.

A good collection system should be able to collect any target data that is visible to a user, wherever it lives: what-you-see-is-what-you-get, unblocked collection, whether or not the data requires logging in. Most valuable information generally requires logging in to see, as on social networking sites; to cope with this, the crawler system must simulate user login in order to obtain data normally. However, social websites want to form a closed loop and are unwilling to expose data outside the site, so they will never be as open as news sites. Most of them adopt restrictions to keep robot crawlers from harvesting data, and usually an account does not crawl for long before it is detected and blocked. Does that mean we cannot crawl data from these websites? Definitely not: as long as a social website does not close off page access entirely, we can reach the same data a normal person can reach. In the end, it comes down to simulating a person's normal behavior, which is professionally called "anti-monitoring".

The source website generally has the following restrictions:

1. The number of visits from a single IP within a given period. A normal user browsing a website will not, unless clicking around at random, visit it too quickly, nor keep it up for too long. This problem is easy to solve: we can build a pool of many proxy IPs, randomly pick a proxy from the pool for each request, and simulate access that way. Proxy IPs come in two types, transparent proxies and anonymous proxies. A hedged sketch follows.
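
A minimal sketch of picking a random proxy from a pool per request, in the same urllib2 style as the article's code; the proxy addresses below are placeholders, not working proxies:

import random
import urllib2

PROXY_POOL = ['111.111.111.111:8080', '222.222.222.222:3128']   # placeholder addresses

def httpRequestViaProxy(url):
  proxy = random.choice(PROXY_POOL)                              # pick a random proxy each time
  opener = urllib2.build_opener(urllib2.ProxyHandler({'http': proxy}))
  return opener.open(url).read()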

2. The number of visits from a single account within a given period. If an account hits a data interface 24 hours a day at high speed, it is probably a robot. We can use a large number of accounts that behave normally, that is, the way ordinary people operate on a social networking site: keep the number of URLs visited per unit of time low, and leave an interval between visits. That interval can be a random value, i.e. after visiting each URL, sleep for a random period of time before visiting the next URL, as sketched below.
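
A minimal sketch of spacing requests with a random pause; the 3-10 second range is an arbitrary example, and it reuses the httpRequest() helper defined earlier in the article:

import time, random

def crawlUrls(urls):
  for url in urls:
    content = httpRequest(url)          # reuse the request helper defined earlier
    # ... parse and store content here ...
    time.sleep(random.uniform(3, 10))   # pause a random 3-10 seconds before the next URL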

If we can control the access policy of accounts and IPs, there will basically be no problem. Of course, the target website will also adjust its own countermeasures; in this back-and-forth, the crawler must be able to sense when the other side's anti-crawler measures are affecting us, and notify the administrator in time. Ideally, the system would use machine learning to carry out this anti-monitoring confrontation intelligently and keep capturing without interruption. A simple sketch of sensing a block is shown below.
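
A hedged sketch of detecting that the target site has started blocking us and alerting an operator; the status codes and the "captcha" keyword are illustrative assumptions, not rules from the article:

def looksBlocked(status_code, html):
  # Treat explicit denial codes or a captcha page as a sign we are being blocked.
  return status_code in (403, 429) or 'captcha' in html.lower()

def notifyAdmin(message):
  # Placeholder: a real system might send an email or IM alert here.
  print('[ALERT] ' + message)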

The following is the distributed crawler architecture I have been designing recently, shown in Figure 1:

This is just a humble preliminary design that I am in the process of implementing; I am currently building the communication between the server and the clients, mainly using Python's socket module. A minimal sketch of that communication is shown below. If you are interested, feel free to contact me to discuss it and work out a better solution together.
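
The article only names the socket module, so the following is a minimal illustrative sketch (not the author's actual implementation) of a master handing one URL per connection to crawler nodes; hosts, ports, and function names are assumptions:

import socket

def runServer(urls, host='127.0.0.1', port=9999):
  # Hand out one URL per incoming connection from a crawler node.
  server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
  server.bind((host, port))
  server.listen(5)
  for url in urls:
    conn, _addr = server.accept()
    conn.sendall(url.encode('utf-8'))
    conn.close()
  server.close()

def requestTask(host='127.0.0.1', port=9999):
  # A crawler node asks the server for its next URL.
  client = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
  client.connect((host, port))
  url = client.recv(1024).decode('utf-8')
  client.close()
  return url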
