When browsing the Internet, we often come across attractive pictures that we want to save and download, whether as desktop wallpapers or design material. This article introduces how to implement the simplest web crawler in Python. Readers who need it can refer to it; let's take a look.
Preface
A web crawler (also known as a web spider or web robot, and in the FOAF community often simply a crawler) is a program or script that automatically fetches World Wide Web information according to certain rules. I have recently become very interested in Python crawlers and would like to share my learning path here; your suggestions are welcome so we can learn from each other and improve together. Without further ado, here is the detailed introduction:
1. Development tools
The author's tool of choice is Sublime Text 3; its lightness and simplicity appeal to me greatly, and I recommend it to everyone. Of course, if your computer is powerful enough, PyCharm may suit you better.
To set up a Python development environment in Sublime Text 3, this article is recommended:
[sublime to build a python development environment]
2. An introduction to crawlers
As the name suggests, a crawler is like a bug crawling across the great web of the Internet, picking up what we want along the way.
Since we want to crawl the Internet, we need to understand URLs. The formal name is "Uniform Resource Locator"; the nickname is "link". Its structure consists mainly of three parts:
(1) Protocol: for example, the HTTP protocol commonly seen in URLs.
(2) Domain name or IP address: a domain name such as www.baidu.com, or the IP address that the domain name resolves to.
(3) Path: a directory, a file, and so on.
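The three parts above can be seen directly with Python's standard library. The sketch below (not from the original article) uses `urllib.parse.urlparse` on an example URL:

```python
from urllib.parse import urlparse

# split an example URL into the three parts described above
parts = urlparse("http://www.baidu.com/index.html")
print(parts.scheme)  # protocol -> "http"
print(parts.netloc)  # domain name -> "www.baidu.com"
print(parts.path)    # path -> "/index.html"
```

`urlparse` also exposes further components (query string, fragment) when a URL contains them.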
3. Developing the simplest crawler with urllib
(1) Introduction to urllib
Module | Introduction |
---|---|
urllib.error | Exception classes raised by urllib.request. |
urllib.parse | Parse URLs into or assemble them from components. |
urllib.request | Extensible library for opening URLs. |
urllib.response | Response classes used by urllib. |
urllib.robotparser | Load a robots.txt file and answer questions about fetchability of other URLs. |
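As a small aside on the last module in the table, the sketch below feeds `urllib.robotparser` a couple of rules directly (instead of fetching a live robots.txt; the example.com URLs are made up for illustration) and asks which URLs a crawler may fetch:

```python
from urllib import robotparser

rp = robotparser.RobotFileParser()
# parse the rules directly rather than downloading a robots.txt file
rp.parse([
    "User-agent: *",
    "Disallow: /private/",
])
print(rp.can_fetch("*", "http://example.com/public/page.html"))   # True
print(rp.can_fetch("*", "http://example.com/private/page.html"))  # False
```

Polite crawlers check `can_fetch` before requesting a page.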
(2) Developing the simplest crawler
```python
from urllib import request

def visit_baidu():
    URL = "http://www.baidu.com"
    # open the URL
    req = request.urlopen(URL)
    # read the response body
    html = req.read()
    # decode the body as utf-8
    html = html.decode("utf-8")
    print(html)

if __name__ == '__main__':
    visit_baidu()
```
The result is as shown below:
We can also open the URL by first creating a Request object:
```python
from urllib import request

def visit_baidu():
    # create a Request object
    req = request.Request('http://www.baidu.com')
    # open the Request object
    response = request.urlopen(req)
    # read the response
    html = response.read()
    html = html.decode('utf-8')
    print(html)

if __name__ == '__main__':
    visit_baidu()
```
The running result is the same as before.
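One advantage of the Request-object form is that you can attach headers before opening the URL. The sketch below (an addition to the article; the User-Agent string is just an example) sets a User-Agent, which many sites expect from clients:

```python
from urllib import request

# build a Request with an explicit User-Agent header
req = request.Request(
    "http://www.baidu.com",
    headers={"User-Agent": "Mozilla/5.0 (simple-crawler)"},
)
# urllib stores header names capitalized, e.g. "User-agent"
print(req.get_header("User-agent"))  # Mozilla/5.0 (simple-crawler)
```

Passing this `req` to `request.urlopen` then sends the header with the request.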
(3) Error handling
```python
from urllib import request
from urllib import error

def Err():
    url = "https://segmentfault.com/zzz"
    req = request.Request(url)
    try:
        response = request.urlopen(req)
        html = response.read().decode("utf-8")
        print(html)
    except error.HTTPError as e:
        print(e.code)

if __name__ == '__main__':
    Err()
```
The running result is as shown in the figure:
404 is the printed error code; you can look up the details of this status code online.
A URLError can be handled through its reason attribute.
The code for handling URLError is as follows:
```python
from urllib import request
from urllib import error

def Err():
    url = "https://segmentf.com/"
    req = request.Request(url)
    try:
        response = request.urlopen(req)
        html = response.read().decode("utf-8")
        print(html)
    except error.URLError as e:
        print(e.reason)

if __name__ == '__main__':
    Err()
```
The running result is as shown in the figure:
To handle errors properly, it is best to catch both exceptions in the code; after all, the more explicit the code, the clearer it is. Note that HTTPError is a subclass of URLError, so the HTTPError clause must be placed before the URLError clause; otherwise the URLError branch would also swallow HTTP errors such as 404 Not Found.
The code is as follows:
```python
from urllib import request
from urllib import error

# handle both HTTPError and URLError
def Err():
    url = "https://segmentfault.com/zzz"
    req = request.Request(url)
    try:
        response = request.urlopen(req)
        html = response.read().decode("utf-8")
        print(html)
    except error.HTTPError as e:
        print(e.code)
    except error.URLError as e:
        print(e.reason)

if __name__ == '__main__':
    Err()
```
You can change the url to view the output forms of various errors.
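Incidentally, the ordering requirement described above can be checked directly, since the class relationship is part of urllib itself:

```python
from urllib import error

# HTTPError derives from URLError, so an "except URLError" clause placed
# first would also catch every HTTPError before the HTTPError clause runs.
print(issubclass(error.HTTPError, error.URLError))  # True
```

This is why the combined handler lists `error.HTTPError` before `error.URLError`.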
The above is the detailed content of The simplest web crawler tutorial in python. For more information, please follow other related articles on the PHP Chinese website!