What does Python crawler mean?
A Python crawler is a web crawler written in Python, also known as a web spider or web robot: a program or script that automatically fetches information from the World Wide Web according to certain rules. Less common names include ant, autoindexer, emulator, and worm.
Simply put, the Internet is a large network made up of sites and network devices. We visit a site through a browser; the site returns HTML, JS, and CSS code to the browser, and the browser parses and renders that code into the rich, colorful web pages we see.
If we compare the Internet to a big spider web, with data stored at each node of the web, then a Python crawler is a little spider that crawls along the web to catch its prey (the data). A crawler is a program that sends requests to a website, obtains resources, and then analyzes them to extract useful data.
From a technical perspective, a crawler uses a program to simulate a browser requesting a site: it fetches the HTML code, JSON data, or binary data (pictures, videos) the site returns to the local machine, then extracts the data it needs and stores it.
Basic principles of Python crawlers
1. Initiate a request
Use an HTTP library to send a request to the target site, i.e., send a Request
A Request contains request headers, a request body, etc.
Limitation of request libraries: they only fetch the raw response and cannot execute JS or apply CSS the way a browser does
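Step 1 can be sketched with the standard library's `urllib.request` (one possible "HTTP library"; the URL and User-Agent string below are illustrative, and the request is only built, not actually sent):

```python
import urllib.request

def build_request(url: str) -> urllib.request.Request:
    """Sketch of step 1: assemble a Request with custom request headers."""
    headers = {
        # Many sites reject the default Python User-Agent, so crawlers
        # typically send a browser-like one in the request headers.
        "User-Agent": "Mozilla/5.0 (crawler-demo)",
    }
    return urllib.request.Request(url, headers=headers, method="GET")

req = build_request("https://example.com/")
# Sending it would be: urllib.request.urlopen(req)
print(req.get_method(), req.full_url)
```

Calling `urllib.request.urlopen(req)` would actually send the Request and return the Response described in step 2.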
2. Get the response content
If the server responds normally, you will get a Response
A Response may contain HTML, JSON, pictures, videos, etc.
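Because the Response body can be HTML, JSON, or binary data, crawlers usually branch on the `Content-Type` response header before parsing. A minimal sketch (the helper `classify_response` is hypothetical, not part of any library):

```python
def classify_response(content_type: str) -> str:
    """Step 2 sketch: decide how to treat the Response body based on the
    Content-Type header: HTML, JSON, or binary (pictures, videos, ...)."""
    if "application/json" in content_type:
        return "json"
    if "text/html" in content_type:
        return "html"
    return "binary"  # pictures, videos, and other byte streams

print(classify_response("text/html; charset=utf-8"))
print(classify_response("application/json"))
print(classify_response("image/png"))
```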
3. Parse content
Parse HTML data: regular expressions (the re module), or third-party parsing libraries such as BeautifulSoup and pyquery
Parse JSON data: the json module
Parse binary data: write it to a file opened in "wb" mode
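The three parsing cases above can be sketched with the standard library alone (the sample HTML, JSON text, and file name are illustrative; for real pages BeautifulSoup or pyquery are more robust than regular expressions):

```python
import json
import re

html = "<html><head><title>Demo page</title></head><body>hi</body></html>"

# HTML: extract data with a regular expression via the stdlib re module.
title = re.search(r"<title>(.*?)</title>", html).group(1)

# JSON: the json module turns the text into Python objects.
payload = json.loads('{"status": "ok", "count": 2}')

# Binary data (pictures, videos): open the file in "wb" mode and write bytes.
with open("demo.bin", "wb") as f:
    f.write(b"\x89PNG\r\n")  # e.g. the first bytes of a PNG image
```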
4. Save data
Save the extracted data to a database (MySQL, MongoDB, Redis)
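The article names MySQL, MongoDB, and Redis; as a self-contained stand-in, the same step can be sketched with the stdlib sqlite3 module and an in-memory database (the table schema and sample row are illustrative):

```python
import sqlite3

# Step 4 sketch: store crawled (url, title) pairs in a database.
conn = sqlite3.connect(":memory:")  # a real crawler would use MySQL etc.
conn.execute("CREATE TABLE pages (url TEXT PRIMARY KEY, title TEXT)")
conn.execute(
    "INSERT INTO pages VALUES (?, ?)",
    ("https://example.com/", "Example Domain"),
)
conn.commit()

row = conn.execute("SELECT title FROM pages").fetchone()
print(row[0])
```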
The above is the detailed content of What does python crawler mean?. For more information, please follow other related articles on the PHP Chinese website!