A web crawler is a program that automatically downloads web pages from the World Wide Web, and it is an important component of search engines. When crawlers are abused, however, the Internet fills up with duplicated content and original work goes unprotected. As a result, many websites have begun fighting back against crawlers, trying every means to protect their content.
1: User-Agent and Referer detection
User-Agent is a field in the HTTP protocol that describes the client issuing the request. It lets the server identify the client's operating system and version, CPU type, browser and version, rendering engine, language, plug-ins, and so on. Through this field the server can tell what kind of client is visiting the site and block requests that do not come from a normal browser.
Solution:
Disguise the crawler as a browser. Every browser has its own User-Agent, and any user may use any browser, so UA detection can be defeated by attaching a real browser's User-Agent to each request (ideally rotating among several).
Referer is part of the HTTP header. When a browser sends a request to a web server, it usually carries a Referer telling the server which page the request was linked from. For example, some image sites check the Referer value when you request a picture; if it does not match, the site will not return the real image.
Solution:
When a request is subject to Referer detection, carry a matching Referer value.
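Both solutions above amount to setting headers on the outgoing request. A minimal sketch with Python's standard library, where the User-Agent string and the URLs are purely illustrative:

```python
from urllib.request import Request, urlopen

# Spoofed headers: a real browser's User-Agent plus a Referer that
# matches the page the image would normally be linked from.
headers = {
    "User-Agent": ("Mozilla/5.0 (Windows NT 10.0; Win64; x64) "
                   "AppleWebKit/537.36 (KHTML, like Gecko) "
                   "Chrome/120.0.0.0 Safari/537.36"),
    "Referer": "https://example.com/gallery.html",
}

req = Request("https://example.com/image.jpg", headers=headers)
# The actual fetch would be: resp = urlopen(req)
```

Third-party libraries such as `requests` take the same `headers` dictionary via `requests.get(url, headers=headers)`.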
2: JS obfuscation and rendering
So-called JavaScript obfuscation basically means:
1. Removing functions that are never actually called.
2. Merging scattered variable declarations.
3. Simplifying logical functions.
4. Shortening variable names. The details depend on the strengths and weaknesses of each compression tool; common tools include UglifyJS, JScrambler, and others.
JS rendering means the HTML page is modified by JavaScript. For example, some pages return no data in their initial HTML; the data is inserted into the page by JS after it loads. A plain crawler does not execute JavaScript, so this situation has to be handled in other ways.
Solution:
1. Read the website's JS source, find the key code, and re-implement it in Python.
2. Read the website's JS source, find the key code, and execute it directly with libraries such as PyV8 or execjs.
3. Simulate a full browser environment with the selenium library.
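Option 1 above can be sketched as follows. Suppose, hypothetically, a site's JS computes a request signature as the MD5 of the API path joined with a timestamp; once you have read that out of the JS source, the same logic is easy to port to Python (the `sign` scheme and parameter names here are invented for illustration):

```python
import hashlib
import time

def make_sign(path, ts):
    """Python port of a (hypothetical) site's JS signing routine:
    sign = md5(path + "|" + timestamp), hex-encoded."""
    raw = "{}|{}".format(path, ts).encode("utf-8")
    return hashlib.md5(raw).hexdigest()

ts = int(time.time())
# These query parameters would then be attached to the real request.
params = {"path": "/api/list", "ts": ts, "sign": make_sign("/api/list", ts)}
```

When the JS is too tangled to port by hand, option 2 (executing the original code with execjs) or option 3 (selenium) avoids re-implementation entirely at the cost of speed.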
3: IP request-frequency limits
Web systems talk to the web container over HTTP, and each request creates at least one TCP connection between client and server, so the server can clearly see how many requests a given IP address initiates per unit of time. When that number exceeds a certain threshold, the traffic can be judged abnormal.
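A minimal sketch of how such server-side detection works, using a sliding window per IP (the 60-second window and the threshold of 100 requests are illustrative values, not any particular site's policy):

```python
import time
from collections import defaultdict, deque

WINDOW = 60      # seconds to look back
THRESHOLD = 100  # max requests per IP inside the window

hits = defaultdict(deque)  # ip -> timestamps of recent requests

def is_abnormal(ip, now=None):
    """Record one request from `ip` and report whether it exceeds the limit."""
    now = time.time() if now is None else now
    q = hits[ip]
    q.append(now)
    while q and now - q[0] > WINDOW:  # discard requests outside the window
        q.popleft()
    return len(q) > THRESHOLD
```

From the crawler's point of view, the goal of the solutions below is simply to keep any single IP under whatever threshold the target server enforces.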
Solution:
1. Build your own IP proxy pool and attach a different proxy address to each request by rotation.
2. Use ADSL dynamic dialing: each redial yields a new IP, so the address is never fixed.
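A minimal sketch of solution 1, rotating round-robin through a proxy pool with the standard library (the proxy addresses are placeholders; a real pool would be filled from a proxy provider and have dead proxies pruned):

```python
from itertools import cycle
from urllib.request import ProxyHandler, build_opener

# Placeholder proxy addresses -- replace with live proxies from your own pool.
PROXIES = [
    "http://10.0.0.1:8080",
    "http://10.0.0.2:8080",
    "http://10.0.0.3:8080",
]
_rotation = cycle(PROXIES)

def next_opener():
    """Return the next proxy and an opener that routes traffic through it."""
    proxy = next(_rotation)
    opener = build_opener(ProxyHandler({"http": proxy, "https": proxy}))
    return proxy, opener

# Each call uses the next proxy in the rotation:
# proxy, opener = next_opener(); opener.open("http://example.com")
```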
4: Verification codes
A verification code, or CAPTCHA ("Completely Automated Public Turing test to tell Computers and Humans Apart"), is a public, fully automated program for distinguishing whether a user is a computer or a human.
It helps prevent malicious password cracking, ticket scalping, and forum flooding, and effectively stops an attacker from brute-forcing a specific account with continuous automated login attempts. The challenge can be generated and graded by a computer, but only a human is supposed to be able to answer it; since computers cannot solve it, a user who answers correctly can be assumed to be human.
Solution:
1. Identify the verification code manually.
2. Use pytesseract to recognize simple verification codes.
3. Hand the image off to a commercial captcha-solving platform.
4. Train a machine-learning model to recognize it.
The above is the detailed content of how to do anti-crawling in Python, from the PHP Chinese website.