A search engine spider is a web crawler: a program or script that automatically crawls information from the World Wide Web according to certain rules. Each search engine has its own spider, which is also called a search engine crawler or search engine robot.
What is a search engine spider?
Search engine spiders are commonly referred to as web crawlers. A web crawler (also known as a web spider or web robot, and in the FOAF community as a web page chaser) is a program or script that automatically captures information from the World Wide Web according to certain rules.
We can think of the Internet as a huge "spider web" and a search engine spider as a "robot" moving across it. The spider's main task is to browse the information in this huge web (the Internet), crawl it back to the search engine's servers, and build an index database from it. It is like a robot browsing our website and saving the content to its own computer.
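To make the idea concrete, here is a minimal crawler sketch, assuming Python 3 and only its standard library; the start URL is a placeholder and the in-memory dictionary stands in for a real index database. It fetches a page, collects its links, follows them breadth-first, and keeps the downloaded HTML, which is essentially what a search engine spider does at a much larger scale.

```python
# Minimal crawler sketch (assumption: Python 3, standard library only;
# the start URL below is hypothetical). A real search engine spider does
# the same thing at massive scale and stores the result in an index database.
from html.parser import HTMLParser
from urllib.parse import urljoin
from urllib.request import urlopen


class LinkParser(HTMLParser):
    """Collects the href value of every <a> tag on a page."""

    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)


def crawl(start_url, max_pages=10):
    """Breadth-first crawl starting from start_url, up to max_pages pages."""
    queue = [start_url]
    seen = set()
    index = {}  # url -> raw HTML ("saving the content to its own computer")
    while queue and len(index) < max_pages:
        url = queue.pop(0)
        if url in seen:
            continue
        seen.add(url)
        try:
            html = urlopen(url, timeout=5).read().decode("utf-8", errors="replace")
        except Exception:
            continue  # skip pages that fail to download
        index[url] = html
        parser = LinkParser()
        parser.feed(html)
        # Resolve relative links against the current page and queue them.
        queue.extend(urljoin(url, link) for link in parser.links)
    return index


if __name__ == "__main__":
    pages = crawl("https://example.com/")  # hypothetical start URL
    print(len(pages), "pages fetched")
```

A production spider would also respect robots.txt, rate-limit its requests, and extract text and metadata for indexing rather than keeping raw HTML, but the fetch-parse-follow loop above is the core of the process.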
Each search engine runs its own spider (crawler or robot), and each spider identifies itself by a distinctive name.
The spider names of the major search engines are listed below (a sketch of recognizing them by User-Agent follows the list):
Baidu: Baiduspider
Google: Googlebot
Sogou: Sogou spider
Soso: Sosospider
360 Search: 360Spider
Youdao: YodaoBot
Yahoo: Yahoo! Slurp
Bing: bingbot (formerly msnbot)
MSN: msnbot
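Servers typically recognize these spiders by the token each one sends in the User-Agent request header. The sketch below, assuming Python 3, matches an incoming User-Agent string against the names listed above; the sample header is illustrative only.

```python
# Minimal sketch of recognizing search engine spiders by their User-Agent
# token (assumption: Python 3; the sample header below is illustrative only).
SPIDER_TOKENS = {
    "baiduspider": "Baidu",
    "googlebot": "Google",
    "sogou": "Sogou",
    "sosospider": "Soso",
    "360spider": "360 Search",
    "yodaobot": "Youdao",
    "slurp": "Yahoo",
    "bingbot": "Bing",
    "msnbot": "Bing / MSN",
}


def identify_spider(user_agent):
    """Return the search engine name if the User-Agent matches a known spider."""
    ua = user_agent.lower()
    for token, engine in SPIDER_TOKENS.items():
        if token in ua:
            return engine
    return None


if __name__ == "__main__":
    sample = "Mozilla/5.0 (compatible; Baiduspider/2.0; +http://www.baidu.com/search/spider.html)"
    print(identify_spider(sample))  # -> "Baidu"
```

Substring matching on the User-Agent is only a convenience check; a spoofed header can claim any name, so stricter verification (for example, reverse DNS lookup of the requesting IP) is needed when access control depends on it.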