What Python crawlers can do
Roughly 80% of the world's crawlers are developed in Python. Learning crawling skills well provides an important data source for subsequent big data analysis, data mining, machine learning, and similar work.
What is a crawler?
A web crawler (also known as a web spider or web robot, and in the FOAF community often called a web chaser) is a program or script that automatically fetches information from the World Wide Web according to certain rules. Other, less commonly used names include ant, auto-indexer, emulator, and worm.
In layman's terms, a crawler is a program that obtains the data you want from web pages, that is, it fetches that data automatically.
What can a crawler do?
You can use a crawler to collect pictures, videos, or any other data you want. As long as you can access the data through a browser, you can obtain it with a crawler.
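As a minimal sketch using only the Python standard library, the idea "if a browser can open it, a crawler can fetch it" boils down to a single HTTP request (the image URL in the comment is a placeholder, not a real address):

```python
import urllib.request

def fetch(url: str) -> bytes:
    """Download the raw bytes of any URL a browser could open
    (an HTML page, an image, a video chunk, ...)."""
    with urllib.request.urlopen(url) as response:
        return response.read()

# Example: save an image the same way a browser downloads it.
# "https://example.com/picture.jpg" is a placeholder URL.
# with open("picture.jpg", "wb") as f:
#     f.write(fetch("https://example.com/picture.jpg"))
```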
What is the essence of a crawler?
It simulates a browser opening a web page and extracts the part of the page data we want.
The process of the browser opening the web page:
When you enter an address in the browser, the browser looks up the server host through a DNS server and sends a request to it. The server processes the request and sends the results back to the browser, including HTML, JS, CSS, and other file contents. The browser then parses these files and renders the page the user finally sees.
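The request step above can be sketched with the standard library. Many servers inspect the User-Agent header to decide whether the client looks like a real browser, so crawlers often set a browser-like one; the exact User-Agent string below is only an example value:

```python
import urllib.request

def browser_request(url: str) -> urllib.request.Request:
    """Build a request that imitates a browser by sending a
    browser-like User-Agent header (the string is an example)."""
    return urllib.request.Request(
        url,
        headers={"User-Agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64)"},
    )

req = browser_request("https://example.com/")
# urllib.request.urlopen(req) would then perform the DNS lookup,
# send the request, and return the server's response (HTML etc.).
```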
So the page the user sees in the browser is built from HTML code, and our crawler obtains the resources we want by analyzing and filtering that HTML code.
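For example, the "analyze and filter the HTML" step can be done with the standard-library html.parser module. This is a minimal sketch; real crawlers often use third-party parsers such as Beautiful Soup instead:

```python
from html.parser import HTMLParser

class LinkExtractor(HTMLParser):
    """Collect the href of every <a> tag and the src of every <img> tag."""
    def __init__(self):
        super().__init__()
        self.links = []
        self.images = []

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag == "a" and "href" in attrs:
            self.links.append(attrs["href"])
        elif tag == "img" and "src" in attrs:
            self.images.append(attrs["src"])

# In a real crawler this HTML would come from the HTTP response body.
page = '<html><body><a href="/next.html">next</a><img src="cat.jpg"></body></html>'
extractor = LinkExtractor()
extractor.feed(page)
print(extractor.links)   # ['/next.html']
print(extractor.images)  # ['cat.jpg']
```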