A Python crawler obtains data from a website through the request-to-response cycle: the crawler, disguised as a browser, sends a Request to the server, and after receiving it the server replies with a Response.
In the previous article we explained what a crawler is and gave an introduction to the basic crawling process. Today we look at that process in detail: what Request and Response are.
Request
1. What is a Request?
The browser sends information to the server that hosts the URL; this process is called an HTTP Request.
2. What is included in a Request?
Request method: the main request methods are GET and POST, along with HEAD, PUT, DELETE, etc. The parameters of a GET request are appended to the URL. For example, if we open Baidu and search for "pictures", the requested URL is https://www.baidu.com/s?wd=picture. The parameters of a POST request are carried in the request body and do not appear in the URL. For example, when we log in to Zhihu with a username and password, the Network panel of the browser's developer tools shows the request carrying our login information as Form Data key-value pairs, which keeps the account information out of the URL (a short sketch after this list illustrates the difference).
Request URL: URL stands for Uniform Resource Locator, which is what we usually call a web address. A picture, a music file, a web document, and so on can each be identified by a unique URL. The URL indicates where the resource is located and how the browser should handle it.
Request headers: the headers carry information about the request, such as User-Agent (identifying the browser making the request), Host, Cookies, and so on.
Request body: the additional data carried by the request, such as the form data submitted when logging in.
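To make the difference between GET and POST concrete, here is a minimal sketch using the requests library. The login URL and the credentials are placeholders for illustration only, not a real endpoint:
import requests

# GET: the parameters become a query string appended to the URL
resp = requests.get('https://www.baidu.com/s', params={'wd': 'picture'})
print(resp.url)  # the final URL contains ?wd=picture

# POST: the form data travels in the request body, not in the URL
# ('https://example.com/login' and the credentials are placeholders)
resp = requests.post('https://example.com/login',
                     data={'username': 'user', 'password': 'secret'})
print(resp.request.body)  # the key-value pairs sit in the body, invisible in the URL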
Response
1. What is a Response?
After the server receives the information sent by the browser, it processes it according to the content of the request and then sends a message back to the browser. This process is called an HTTP Response.
2. What is included in a Response?
Response status: there are many status codes, for example 200 for success, 301 for a redirect, 404 for page not found, and 502 for a server error.
Response headers: information such as the content type, content length, server information, cookie settings, and so on.
Response body: the most important part, containing the content of the requested resource, such as the HTML code of a web page or the binary data of an image.
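Each of these parts is exposed as an attribute of the response object returned by the requests library; a minimal sketch:
import requests

resp = requests.get('https://www.baidu.com')
print(resp.status_code)                  # response status, e.g. 200
print(resp.headers.get('Content-Type'))  # response headers: content type, length, server, ...
print(resp.text[:200])                   # response body: the first 200 characters of the HTML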
Simple demonstration
import requests  # the requests library must be installed first (pip install requests)

# Headers that disguise the request as a normal browser visit
headers = {'User-Agent': 'Mozilla/5.0 (Windows NT 6.1; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/55.0.2883.87 Safari/537.36'}
resp = requests.get('https://www.baidu.com', headers=headers)
print(resp.text)         # print the page's HTML source code
print(resp.status_code)  # print the response status code
After running it successfully, you will see the printed HTML source code and the 200 status code. This is a basic implementation of the crawler's Request and Response process.
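In practice it is a good habit to check the response status before using the body. A minimal sketch, assuming the same target page; requests can raise an exception automatically for 4xx/5xx codes:
import requests

headers = {'User-Agent': 'Mozilla/5.0'}  # a minimal browser-like header for illustration
resp = requests.get('https://www.baidu.com', headers=headers)
resp.raise_for_status()   # raises requests.exceptions.HTTPError for 4xx/5xx status codes
html = resp.text          # only use the body once the status check has passed
print(len(html))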