
How to do anti-crawling in python

(*-*)浩 (Original)
2019-07-01

A web crawler is a program that automatically extracts web pages: it downloads pages from the World Wide Web for search engines and is an important component of them. But when crawlers are abused, the Internet fills up with duplicated content and original work goes unprotected. As a result, many websites have begun to fight back against crawlers and try every means to protect their content.


1: User-Agent and Referer detection

User-Agent is a field in the HTTP request header whose role is to describe the client that issued the request.

It lets the server identify the client's operating system and version, CPU type, browser and version, rendering engine, language, plug-ins, and so on.

Through this field the server can tell who is visiting the site and block requests that do not come from a normal browser.

Solution:

Disguise the request with a browser's User-Agent. Every browser has its own User-Agent string, and any user may be using any browser, so UA detection can be defeated by attaching a real browser User-Agent to each request (rotating among several is even safer).

Referer is another header field. When a browser requests a resource, it usually sends the Referer to tell the server which page the request was linked from. For example, some image sites check the Referer when an image is requested, and if it does not match, they do not return the real image.

Solution:

For requests to sites that check the Referer, carry a matching Referer value.
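The two fixes above can be sketched with the standard library's urllib. This is a minimal example; the User-Agent string, URLs, and Referer value are placeholders, and a real crawler would copy them from an actual browser session on the target site.

```python
import urllib.request

headers = {
    # A typical desktop-browser User-Agent (example value; copy a real
    # one from the browser you want to imitate).
    "User-Agent": ("Mozilla/5.0 (Windows NT 10.0; Win64; x64) "
                   "AppleWebKit/537.36 (KHTML, like Gecko) "
                   "Chrome/91.0.4472.124 Safari/537.36"),
    # Pretend the request was linked from the site's own page.
    "Referer": "https://example.com/gallery.html",
}

# Build a request carrying the disguised headers (no network call yet).
req = urllib.request.Request("https://example.com/image.jpg", headers=headers)

# urllib normalizes header names, e.g. "User-agent".
print(req.get_header("User-agent"))
# To actually send it: urllib.request.urlopen(req)
```

The same headers dict works unchanged with third-party clients such as requests (`requests.get(url, headers=headers)`).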

2: js obfuscation and rendering

So-called JavaScript obfuscation (compression) mainly consists of:

1. Removing functions that are never actually called.

2. Merge scattered variable declarations.

3. Simplification of logical functions.

4. Shortening variable names. The exact result depends on the strengths and weaknesses of the compression tool; common tools include UglifyJS, JScrambler, and others.

JS rendering means the HTML page is modified by scripts: some pages return no data in the initial response, and the data is inserted into the HTML only after the JavaScript runs. A plain HTTP crawler does not execute JavaScript, so when we encounter this situation it has to be handled in other ways.

Solution:

1. Find the key code by reading the website js source code and implement it in python.

2. Find the key code by reading the website js source code, and use PyV8, execjs and other libraries to directly execute the js code.

3. Simulate a full browser environment directly with the selenium library.
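Solution 1 can be illustrated with a contrived example. Suppose reading the site's JS source reveals that each API request carries a "sign" parameter computed roughly as md5 of the path, a timestamp, and a salt (the formula and salt here are entirely hypothetical); once the formula is known, re-implementing it in Python is straightforward:

```python
import hashlib

# Placeholder salt; the real value would be extracted from the JS source.
SALT = "a1b2c3"


def make_sign(path: str, timestamp: int) -> str:
    """Python re-implementation of the (hypothetical) JS signing code:
    sign = md5(path + "-" + timestamp + "-" + SALT)."""
    raw = f"{path}-{timestamp}-{SALT}"
    return hashlib.md5(raw.encode("utf-8")).hexdigest()


print(make_sign("/api/list", 1561961411))
```

The computed sign would then be appended to each request's query string, letting the crawler fetch the data endpoints directly without running any JavaScript.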

3: IP restriction frequency

Web systems talk to the web container over HTTP, and each request creates at least one TCP connection between the client and the server.

On the server side, the requests initiated by a single IP address per unit of time are plainly visible.

When the number of requests exceeds a certain value, it can be determined as an abnormal user request.
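The server-side check just described can be sketched as a sliding-window counter: count each IP's requests over the last WINDOW seconds and flag the IP once the count exceeds LIMIT. The thresholds below are illustrative, not values any particular site uses.

```python
from collections import defaultdict, deque

WINDOW = 60.0   # seconds of history to keep per IP
LIMIT = 100     # max requests allowed inside one window

hits = defaultdict(deque)  # ip -> timestamps of recent requests


def is_abnormal(ip: str, now: float) -> bool:
    """Record one request from `ip` at time `now`; return True once the
    IP has made more than LIMIT requests within the last WINDOW seconds."""
    q = hits[ip]
    q.append(now)
    # Drop timestamps that have fallen out of the window.
    while q and now - q[0] > WINDOW:
        q.popleft()
    return len(q) > LIMIT


# 150 requests from one IP within a fraction of a second trip the limit.
flags = [is_abnormal("1.2.3.4", t / 1000) for t in range(150)]
print(flags[-1])  # True
```

Understanding this detection logic is what motivates the two countermeasures below: spread the requests across many IPs so no single address crosses the threshold.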

Solution:

1. Build your own IP proxy pool and attach a different proxy address to each request by rotation.

2. Use ADSL dynamic dialing, which has a unique property: every time you redial you are assigned a new IP, so the crawler's IP is never fixed.
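A minimal sketch of solution 1, rotating through a proxy pool with itertools.cycle. The proxy addresses are placeholders; a real pool would be filled from a proxy provider and checked for liveness before use.

```python
from itertools import cycle

# Placeholder proxy addresses.
PROXIES = [
    "http://10.0.0.1:8080",
    "http://10.0.0.2:8080",
    "http://10.0.0.3:8080",
]

proxy_pool = cycle(PROXIES)


def next_proxy() -> dict:
    """Return the next proxy as the mapping urllib/requests expect."""
    addr = next(proxy_pool)
    return {"http": addr, "https": addr}


# Each request goes out through a different proxy, e.g. with requests:
#   requests.get(url, proxies=next_proxy(), timeout=5)
print(next_proxy()["http"])
```

Round-robin rotation keeps each address under the per-IP threshold; pools are often combined with random choice and with evicting proxies that time out.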

4: Verification codes

A verification code, or CAPTCHA ("Completely Automated Public Turing test to tell Computers and Humans Apart"), is a public, fully automated test that distinguishes whether the user is a computer or a human.

It helps prevent malicious password cracking, ticket scalping, and forum spam, and effectively stops a hacker from brute-forcing a specific registered account with a program that makes continuous login attempts.

The challenge can be generated and graded by a computer, but only a human can solve it. Since a computer cannot answer the question, whoever answers it correctly can be assumed to be human.

Solution:

1. Recognize the verification codes manually.

2. Recognize simple verification codes with pytesseract.

3. Hand the codes off to a third-party captcha-solving platform.

4. Train a machine-learning model to recognize them.


