How to Write a Good Web Crawler
In essence, a web crawler "steals" data from the Internet. With a crawler we can collect the resources we need, but improper use can also cause serious problems.
Therefore, when using web crawlers, we need to "steal" by the rules.
Web crawlers fall into three main categories:
1. Small scale, small data volume, insensitive to crawling speed. These can be built with the Requests library and are mainly used to crawl individual web pages.
2. Medium scale, large data volume, sensitive to crawling speed. These can be built with the Scrapy framework and are mainly used to crawl a website or a series of websites.
3. Large scale, where crawling speed is critical, as in a search engine. These require custom development and are used to crawl the entire web, usually to build a web-wide search engine such as Baidu or Google.
Of these three, the first is by far the most common: small-scale crawlers that fetch individual web pages, like the minimal sketch below.
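As an illustration of the first category, here is a minimal sketch using the Requests library to fetch a single page. The URL and the User-Agent string are placeholders, not part of the original article:

import requests

def fetch_page(url, timeout=10):
    # Identify the crawler honestly; the name here is a placeholder
    headers = {"User-Agent": "my-small-crawler/0.1"}
    resp = requests.get(url, headers=headers, timeout=timeout)
    resp.raise_for_status()  # raise on 4xx/5xx instead of parsing an error page
    resp.encoding = resp.apparent_encoding  # guess encoding for non-UTF-8 pages
    return resp.text

if __name__ == "__main__":
    print(fetch_page("https://example.com")[:200])  # placeholder URL

Setting a timeout and checking the status code keeps a small crawler from hanging on slow servers or silently saving error pages.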
There are also many objections to web crawlers. A crawler keeps sending requests to a server, which degrades server performance, harasses the server, and adds to the workload of the site's maintainers.
Beyond harassing servers, web crawlers can also create legal risk: the data on a server has property rights, and using that data for profit can have legal consequences.
In addition, web crawlers may leak users' personal privacy.
In short, the risks of web crawlers come down to three points:
Harassment of server performance
Legal risk at the content level
Leakage of personal privacy
Therefore, the use of web crawlers requires certain rules.
In practice, larger websites already impose restrictions on web crawlers, and crawling is treated as a function that can be regulated across the Internet.
For a typical server, web crawlers can be limited in two ways:
1. If the website's owner has some technical capability, they can limit crawlers through source review. Source review generally means inspecting the User-Agent of each request and rejecting unwanted crawlers (see the sketch after this list).
2. They can use the Robots protocol to tell crawlers which rules to abide by, which parts of the site may be crawled and which may not, and ask all crawlers to comply.
The second method is a form of public notice. The Robots protocol is advisory rather than binding, so a crawler may choose not to comply, but doing so carries legal risk. Together, these two methods form an effective moral and technical constraint on web crawlers across the Internet. This article focuses on the second method.
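As a sketch of the first method, source review, the following server rejects requests whose User-Agent matches a denylist. Flask is used here only as an example framework, and the denylist entries are hypothetical:

from flask import Flask, abort, request

app = Flask(__name__)

BLOCKED_AGENTS = ("EtaoSpider", "BadBot")  # hypothetical denylist

@app.before_request
def review_source():
    # Source review: inspect the User-Agent of every incoming request
    ua = request.headers.get("User-Agent", "")
    if any(agent in ua for agent in BLOCKED_AGENTS):
        abort(403)  # refuse service to blocked crawlers

@app.route("/")
def index():
    return "Welcome, human visitors and well-behaved crawlers."

Note that a User-Agent string can be forged, which is why source review requires technical effort and is only a partial defense.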
When we write a web crawler, then, we should respect the way the website's maintainers manage their resources.
Some websites on the Internet have no Robots protocol, and all of their data may be crawled; the vast majority of mainstream websites, however, support the Robots protocol and restrict crawling with it. The following introduces the basic syntax of the Robots protocol in detail.
Robots protocol (Robots Exclusion Standard, web crawler exclusion standard):
Function: the website tells web crawlers which pages may be crawled and which may not.
Format: a robots.txt file in the root directory of the website.
Basic syntax of the Robots protocol: * means all, / means the root directory.
For example, PMCAFF's Robots protocol:
User-agent: *
Disallow: /article/edit
Disallow: /discuss/write
Disallow: /discuss/edit
Line 1, User-agent: *, means that all web crawlers must abide by the protocol that follows.
Line 2, Disallow: /article/edit, means that no crawler may access content under /article/edit; the remaining lines work the same way.
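Before crawling, these rules can be checked programmatically. Here is a minimal sketch using Python's standard urllib.robotparser module; the crawler name is a placeholder, and the expected results assume the robots.txt shown above:

from urllib.robotparser import RobotFileParser

rp = RobotFileParser()
rp.set_url("https://www.pmcaff.com/robots.txt")
rp.read()  # download and parse the robots.txt

ua = "my-small-crawler"  # placeholder crawler name
print(rp.can_fetch(ua, "https://www.pmcaff.com/article/edit"))  # False: disallowed above
print(rp.can_fetch(ua, "https://www.pmcaff.com/"))              # True: not disallowed

Calling can_fetch before every request is the simplest way for a small crawler to stay within a site's rules.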
If you look at JD.com's Robots protocol, you can see the entry User-agent: EtaoSpider followed by Disallow: /, where EtaoSpider is treated as a malicious crawler and is not allowed to crawl any of JD's resources:
User-agent: *
Disallow: /?*
Disallow: /pop/*.html
Disallow: /pinpai/*.html?*
User-agent: EtaoSpider
Disallow: /
User-agent: HuihuiSpider
Disallow: /
User-agent: GwdangSpider
Disallow: /
User-agent: WochachaSpider
Disallow: /
With the Robots protocol, a website can regulate access to its content and tell all web crawlers which pages may be crawled and which may not.
It is important to note that the Robots protocol lives in the root directory; different root directories may have different Robots protocols, so pay attention when crawling.
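Since each root directory has its own robots.txt, a crawler needs to derive the right robots.txt location for whatever page it is about to fetch. A small sketch using the standard urllib.parse module; the page URL is just an example:

from urllib.parse import urlsplit, urlunsplit

def robots_url(page_url):
    # The robots.txt for a page always sits at the root of the same host
    parts = urlsplit(page_url)
    return urlunsplit((parts.scheme, parts.netloc, "/robots.txt", "", ""))

print(robots_url("https://www.jd.com/pinpai/123.html"))  # https://www.jd.com/robots.txt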