
How to Build a Good Web Crawler?

Jun 20, 2017 pm 04:23 PM

The essence of a web crawler is to "take" data from the Internet. With a crawler we can collect the resources we need, but improper use can also cause serious problems.

Therefore, when using web crawlers, we need to "take" data in the right way.

Web crawlers are mainly divided into the following three categories:

1. Small scale: the amount of data is small and crawling speed is not critical. For these, the Requests library is enough; such crawlers are mainly used to fetch individual web pages.

2. Medium scale: the data volume is larger and crawling speed matters. For these, the Scrapy framework is a good fit; such crawlers are mainly used to crawl a website or a series of websites.

3. Large scale: search-engine scale, where crawling speed is key. This requires custom development and is used to crawl the entire web, typically to build a whole-web search engine such as Baidu or Google.

Of these three, the first is by far the most common: most crawlers are small-scale programs that fetch individual web pages.
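For the first, small-scale category, a single-page fetch with the Requests library can be sketched as below; the function name and error handling are illustrative, not taken from the article:

```python
import requests

def fetch_page(url, timeout=5):
    """Fetch one web page and return its text, or None on any failure."""
    try:
        resp = requests.get(url, timeout=timeout)
        resp.raise_for_status()  # raise on 4xx/5xx responses
        resp.encoding = resp.apparent_encoding  # guess encoding for non-UTF-8 pages
        return resp.text
    except requests.RequestException:
        return None
```

Wrapping the request in a try/except keeps a small crawler from crashing on one bad URL, which matters when looping over many pages.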

Web crawlers also draw many objections. Because a crawler constantly sends requests to a server, it can degrade server performance, effectively harassing the server and increasing the workload of the site's maintainers.

Beyond harassing servers, web crawlers can also create legal risk: the data on a server has property rights, and using it for profit may lead to legal liability.

In addition, web crawlers may also cause user privacy leaks.

In short, the risks of web crawlers fall mainly into three areas:

  • Harassment of server performance

  • Legal risks at the content level

  • Leakage of personal privacy

Therefore, the use of web crawlers requires certain rules.

In practice, many larger websites already impose restrictions on web crawlers, and crawling is increasingly treated as a function that can be regulated across the Internet.

For general servers, we can limit web crawlers in two ways:

1. If the website owner has the technical capability, web crawlers can be limited through source review.

Source review generally works by inspecting the User-Agent header of each request. This article focuses on the second method.

2. Use the Robots protocol to tell web crawlers which rules to obey, which pages may be crawled and which may not, and ask all crawlers to comply.

The second method informs crawlers in the form of an announcement. The Robots protocol is advisory rather than binding, so a crawler may choose not to comply, but doing so carries legal risk. Together, these two methods form effective moral and technical constraints on web crawlers across the Internet.
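As a rough illustration of the first method, source review by User-Agent can be sketched as a simple server-side check; the blocked bot names here are hypothetical:

```python
# Hypothetical blocklist for source review; real sites maintain their own lists.
BLOCKED_AGENTS = ("BadBot", "EvilSpider")

def is_request_allowed(user_agent: str) -> bool:
    """Reject a request whose User-Agent matches a blocked crawler name."""
    ua = user_agent.lower()
    return not any(bot.lower() in ua for bot in BLOCKED_AGENTS)
```

A real deployment would apply such a check in the web server or a middleware layer, and note that the User-Agent header is self-reported, so this review can be evaded by crawlers that spoof it.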

So when we write a web crawler, we need to respect how the site's maintainers manage their website's resources.

Some websites on the Internet have no Robots protocol, and in that case all of their data can be crawled; however, the vast majority of mainstream websites support the Robots protocol and restrict crawling through it. The basic syntax of the Robots protocol is introduced in detail below.

Robots protocol (Robots Exclusion Standard, web crawler exclusion standard):

Function: the website tells web crawlers which pages can be crawled and which cannot.

Format: robots.txt file in the root directory of the website.

Basic syntax of Robots protocol: * represents all, / represents the root directory.

For example, PMCAFF's Robots protocol:

User-agent: *

Disallow: /article/edit

Disallow: /discuss/write

Disallow: /discuss/edit

Line 1, User-agent: *, means that all web crawlers must obey the rules that follow;

Line 2, Disallow: /article/edit, means that no crawler may access content under /article/edit; the remaining lines follow the same pattern.

If you look at Jingdong (JD.com)'s Robots protocol, you will see the pair User-agent: EtaoSpider, Disallow: /, where EtaoSpider is treated as a malicious crawler and is not allowed to crawl any of Jingdong's resources:

User-agent: *

Disallow: /?*

Disallow: /pop/*.html

Disallow: /pinpai/*.html?*

User-agent: EtaoSpider

Disallow: /

User-agent: HuihuiSpider

Disallow: /

User-agent: GwdangSpider

Disallow: /

User-agent: WochachaSpider

Disallow: /

With the Robots protocol, a website can regulate access to its content, telling all web crawlers which pages may be crawled and which may not.

It is important to note that the Robots protocol lives in a site's root directory (as robots.txt); different roots, such as different domains or subdomains, may carry different Robots protocols, so pay attention to the relevant one when crawling.
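These rules can also be checked programmatically. Python's standard urllib.robotparser module parses robots.txt rules and answers whether a given crawler may fetch a given URL; this sketch reuses the PMCAFF-style rules shown earlier, with an illustrative crawler name and URLs:

```python
from urllib import robotparser

# Rules in the same form as the PMCAFF example; normally you would call
# rp.set_url("https://example.com/robots.txt") and rp.read() instead.
rules = """\
User-agent: *
Disallow: /article/edit
Disallow: /discuss/write
Disallow: /discuss/edit
"""

rp = robotparser.RobotFileParser()
rp.parse(rules.splitlines())

# can_fetch(user_agent, url) applies the rules that match that crawler.
print(rp.can_fetch("MyCrawler", "https://www.pmcaff.com/article/edit"))  # False
print(rp.can_fetch("MyCrawler", "https://www.pmcaff.com/article/123"))   # True
```

Calling can_fetch before each request is a simple way for a small crawler to stay within a site's stated limits.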
