Analysis of the Working Principle of Python Crawlers
1. How the crawler works
Web crawler, or web spider, is a very vivid name: if the Internet is compared to a spider web, then the crawler is a spider moving around on that web. A web spider finds web pages through their link addresses. Starting from some page of a site (usually the homepage), it reads the content of that page, finds the other link addresses it contains, and then uses those addresses to reach the next pages. The cycle repeats until every page of the site has been fetched. If the entire Internet is regarded as one website, a web spider can use the same principle to crawl all web pages on the Internet. In short, a web crawler is a program that fetches web pages, and fetching pages is its basic operation. So how do you fetch exactly the page you want? Let's start with the URL.
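The crawling cycle described above can be sketched with only Python's standard library. The following is a rough illustration rather than a production crawler; the start URL and the page limit are placeholders you would replace with your own.

from collections import deque
from html.parser import HTMLParser
from urllib.parse import urljoin, urlparse
from urllib.request import urlopen

class LinkCollector(HTMLParser):
    # Collect the href attribute of every <a> tag on a page.
    def __init__(self):
        super().__init__()
        self.links = []
    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)

def crawl_site(start_url, max_pages=20):
    # Breadth-first crawl: fetch a page, harvest its links, queue the
    # links that stay on the same host, and repeat until done.
    host = urlparse(start_url).netloc
    queue, seen, fetched = deque([start_url]), {start_url}, 0
    while queue and fetched < max_pages:
        url = queue.popleft()
        try:
            html = urlopen(url, timeout=10).read().decode("utf-8", "ignore")
        except Exception:
            continue  # skip pages that fail to download
        fetched += 1
        print("fetched:", url)
        collector = LinkCollector()
        collector.feed(html)
        for link in collector.links:
            absolute = urljoin(url, link)  # resolve relative links
            if urlparse(absolute).netloc == host and absolute not in seen:
                seen.add(absolute)
                queue.append(absolute)

# crawl_site("http://www.baidu.com")  # example call; the start page is up to you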
Crawling a web page works much like the way a reader browses pages in a browser such as IE. For example, when you type www.baidu.com into the browser's address bar and open the page, the browser acts as the "client": it sends a request to the server, "grabs" the server-side file to the local machine, and then interprets and displays it. HTML is a markup language that uses tags to mark up content so that it can be parsed and distinguished. The browser's job is to parse the HTML code it receives and turn that source code into the page we actually see.
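As a rough illustration of that request-and-interpret cycle, the small sketch below fetches a page the way a very bare-bones "browser" would and pulls the <title> tag out of the returned HTML; the example address is just the Baidu homepage mentioned above.

import re
from urllib.request import urlopen

# Send the request, receive the raw HTML, then "interpret" a small part of
# it by extracting the <title> tag.
html = urlopen("http://www.baidu.com", timeout=10).read().decode("utf-8", "ignore")
match = re.search(r"<title>(.*?)</title>", html, re.S | re.I)
print(match.group(1) if match else "no <title> found")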
Put simply, a URL is a string such as http://www.baidu.com typed into the browser's address bar. Before looking at URLs, you first need to understand the concept of a URI.
What is a URI?
Every resource available on the Web, such as an HTML document, image, video clip or program, is identified by a Uniform Resource Identifier (URI).
URI usually consists of three parts:
The naming mechanism for accessing resources;
The host name where the resource is stored;
The name of the resource itself, represented by a path.
For example, the following URI: http://www.why.com.cn/myhtml/html1223/
This is a resource that can be accessed through the HTTP protocol, located on the host www.why.com.cn and reached through the path /myhtml/html1223/.
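These three parts can also be pulled out of a URI programmatically. The short sketch below runs Python's urllib.parse on the example URI given above; the comments label the printed values according to the list of parts.

from urllib.parse import urlparse

parts = urlparse("http://www.why.com.cn/myhtml/html1223/")
print(parts.scheme)   # naming mechanism (protocol): http
print(parts.netloc)   # host name: www.why.com.cn
print(parts.path)     # path to the resource: /myhtml/html1223/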
2. URL understanding and examples
URL is a subset of URI. It is the abbreviation of Uniform Resource Locator. In layman's terms, a URL is a string that describes an information resource on the Internet, and it is used mainly by WWW client and server programs. URLs describe all kinds of information resources, including files, server addresses and directories, in a single uniform format. The general format of a URL is (the parts in square brackets [] are optional):
protocol :// hostname[:port] / path / [;parameters][?query]#fragment
The format of URL consists of three parts:
The first part is the protocol (or service method).
The second part is the host name or IP address of the host where the resource is stored (sometimes including a port number).
The third part is the specific address of the host resource, such as directory and file name.
The first part and the second part are separated by the "://" symbol, and the second part and the third part are separated by the "/" symbol. The first and second parts are indispensable, and the third part can sometimes be omitted.
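To make the general format concrete, the sketch below decomposes a made-up URL (www.example.com and its fields are purely illustrative) that exercises every optional part, again with Python's urllib.parse.

from urllib.parse import urlparse

# A made-up URL that uses every optional field of the general format.
url = "http://www.example.com:8080/path/page;params?key=value#section"
p = urlparse(url)
print(p.scheme)    # protocol:   http
print(p.hostname)  # host name:  www.example.com
print(p.port)      # port:       8080
print(p.path)      # path:       /path/page
print(p.params)    # parameters: params
print(p.query)     # query:      key=value
print(p.fragment)  # fragment:   section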
3. Simple comparison between URL and URI
A URI is a higher-level abstraction than a URL: it is simply a string standard for identifying a resource. In other words, URI is the parent class and URL is a subclass of URI; every URL is a URI, which is why URL is a subset of URI. URI stands for Uniform Resource Identifier, while URL stands for Uniform Resource Locator. The difference between the two is that a URI merely identifies a resource (for example, the path to it on the server), whereas a URL also describes how to access that resource (for example, via http://).
Let’s take a look at two small examples of URLs.
1. URL examples for the HTTP protocol:
The Hypertext Transfer Protocol (HTTP) is used to provide access to hypertext information resources.
Example: http://www.peopledaily.com.cn/channel/welcome.htm
The computer's domain name is www.peopledaily.com.cn.
The hypertext file is welcome.htm, located in the directory /channel.
This is a computer belonging to China's People's Daily.
Example: http://www.rol.cn.net/talk/talk1.htm
The computer's domain name is www.rol.cn.net.
The hypertext file is talk1.htm, located in the directory /talk.
This is the address of the Ruide chat room; from here you can enter the first room of the chat room.
2. File URL
When a URL is used to represent a file, the scheme (server type) is file, followed by the host's name or IP address, the file's access path (i.e. its directory), the file name, and so on.
Sometimes directory and file names can be omitted, but the "/" symbol cannot be omitted.
Example: file://ftp.yoyodyne.com/pub/files/foobar.txt
The above URL refers to a file named foobar.txt stored in the directory /pub/files/ on the host ftp.yoyodyne.com.
Example: file://ftp.yoyodyne.com/pub
represents the directory /pub on the host ftp.yoyodyne.com.
Example: file://ftp.yoyodyne.com/
represents the root directory of the host ftp.yoyodyne.com.
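A file URL can be split in the same way. The short sketch below parses the first file example above into its scheme, host and path with Python's urllib.parse.

from urllib.parse import urlparse

p = urlparse("file://ftp.yoyodyne.com/pub/files/foobar.txt")
print(p.scheme)   # scheme: file
print(p.netloc)   # host:   ftp.yoyodyne.com
print(p.path)     # path:   /pub/files/foobar.txt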
The crawler's main object of processing is the URL: it fetches the required file content from a URL address and then processes it further.
Therefore, an accurate understanding of URLs is essential to understanding web crawlers.
The above is the entire content of this article. I hope it can be of some help to everyone's study or work.