When browsing the Internet we often come across attractive pictures that we would like to save, whether as desktop wallpaper or as design material. This article introduces how to implement the simplest web crawler in Python; readers who need it can use it as a reference. Let's take a look together.
Preface
A web crawler (also known as a web spider or web robot, and in the FOAF community often simply a crawler) is a program or script that automatically harvests information from the World Wide Web according to certain rules. I have recently become very interested in Python crawlers, and I would like to share my learning path here; suggestions are welcome so we can learn from each other and improve together. Without further ado, let's look at the details:
1. Development tools
The author's tool of choice is Sublime Text 3. Its lightness and simplicity appeal to me greatly, and I recommend it to everyone. Of course, if your computer is well equipped, PyCharm may suit you better.
For setting up a Python development environment in Sublime Text 3, this article is recommended:
[Sublime builds a Python development environment](http://www.jb51.net/article/51838.htm)
2. Introduction to crawlers
As the name suggests, a crawler crawls across the Internet like a spider over a vast web, picking up whatever we want along the way.
Since we want to crawl the web, we first need to understand URLs. A URL's formal name is "Uniform Resource Locator", and its nickname is "link". Its structure consists of three main parts (illustrated in the sketch after this list):
(1) Protocol: for example, the HTTP protocol we commonly see in URLs.
(2) Domain name or IP address: a domain name such as www.baidu.com, or the IP address the domain name resolves to.
(3) Path: a directory or file, etc.
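To see these three parts concretely, the standard library's urllib.parse module can split a URL for us. A minimal sketch (the example URL is arbitrary):

```python
from urllib.parse import urlparse

# split a URL into its components
url = "http://www.baidu.com/index.html"
parts = urlparse(url)
print(parts.scheme)  # 'http'          -> the protocol
print(parts.netloc)  # 'www.baidu.com' -> the domain name
print(parts.path)    # '/index.html'   -> the path
```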
3. Developing the simplest crawler with urllib
urllib is part of the Python 3 standard library and contains the following modules:

| Module | Introduction |
|---|---|
| urllib.parse | splits URLs into components and reassembles them |
| urllib.request | opens and reads URLs |
| urllib.response | response classes used internally by urllib.request |
| urllib.robotparser | parses robots.txt files |
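As a quick taste of one of these modules, urllib.robotparser can check whether a site's robots.txt permits crawling a page. A minimal sketch (the printed value depends on the site's current robots.txt):

```python
from urllib import robotparser

# a polite crawler consults robots.txt before fetching pages
rp = robotparser.RobotFileParser()
rp.set_url("http://www.baidu.com/robots.txt")
rp.read()
# can_fetch returns True if the given user agent may crawl the URL
print(rp.can_fetch("*", "http://www.baidu.com/"))
```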
The Baidu homepage is simple and uncluttered, which makes it well suited to our first crawler.
The crawler code is as follows:
```python
from urllib import request

def visit_baidu():
    URL = "http://www.baidu.com"
    # open the URL
    req = request.urlopen(URL)
    # read the response body (bytes)
    html = req.read()
    # decode the bytes as utf-8
    html = html.decode("utf-8")
    print(html)

if __name__ == '__main__':
    visit_baidu()
```
Running it prints the HTML source of the Baidu homepage.
You can compare your output with the page source by right-clicking a blank area of the Baidu homepage and choosing "Inspect Element".
Of course, the request module can also construct a Request object first, which is then opened with the urlopen method.
The code is as follows:
```python
from urllib import request

def visit_baidu():
    # create a Request object
    req = request.Request('http://www.baidu.com')
    # open the Request object
    response = request.urlopen(req)
    # read and decode the response
    html = response.read()
    html = html.decode('utf-8')
    print(html)

if __name__ == '__main__':
    visit_baidu()
```
The running result is the same as before.
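One practical benefit of building a Request object is that it accepts extra arguments such as headers. A minimal sketch, assuming we want to send a browser-like User-Agent (the header value here is illustrative):

```python
from urllib import request

# pass custom headers when constructing the Request
headers = {"User-Agent": "Mozilla/5.0"}
req = request.Request("http://www.baidu.com", headers=headers)
response = request.urlopen(req)
print(response.read().decode("utf-8"))
```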
(3) Error handling
Error handling is done through the urllib.error module, which mainly provides the URLError and HTTPError exceptions. HTTPError is a subclass of URLError, so an HTTPError can also be caught as a URLError.
An HTTPError carries the HTTP status code in its code attribute.
The code for handling HTTPError is as follows:
```python
from urllib import request
from urllib import error

def Err():
    url = "https://segmentfault.com/zzz"
    req = request.Request(url)
    try:
        response = request.urlopen(req)
        html = response.read().decode("utf-8")
        print(html)
    except error.HTTPError as e:
        # print the HTTP status code, e.g. 404
        print(e.code)

if __name__ == '__main__':
    Err()
```

Running it prints the status code returned by the server (for example, 404 for a page that does not exist).
A URLError exposes the cause of the failure through its reason attribute.
```python
from urllib import request
from urllib import error

def Err():
    url = "https://segmentf.com/"
    req = request.Request(url)
    try:
        response = request.urlopen(req)
        html = response.read().decode("utf-8")
        print(html)
    except error.URLError as e:
        # print the cause of the failure
        print(e.reason)

if __name__ == '__main__':
    Err()
```
Running it prints the reason for the failure, for example a name-resolution error for this unreachable domain.
Since HTTPError is a subclass of URLError, the two can also be caught together, with the HTTPError clause placed first. The code is as follows:
```python
from urllib import request
from urllib import error

# first approach: catch HTTPError and URLError separately,
# with the subclass (HTTPError) listed first
def Err():
    url = "https://segmentfault.com/zzz"
    req = request.Request(url)
    try:
        response = request.urlopen(req)
        html = response.read().decode("utf-8")
        print(html)
    except error.HTTPError as e:
        print(e.code)
    except error.URLError as e:
        print(e.reason)

if __name__ == '__main__':
    Err()
```
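Because HTTPError is a subclass of URLError, a second approach is to catch only URLError and distinguish the two cases by attribute. A minimal sketch:

```python
from urllib import request
from urllib import error

# second approach: one except clause handles both error types
def err():
    url = "https://segmentfault.com/zzz"
    try:
        response = request.urlopen(url)
        print(response.read().decode("utf-8"))
    except error.URLError as e:
        if hasattr(e, "code"):
            print(e.code)    # an HTTPError: print the status code
        elif hasattr(e, "reason"):
            print(e.reason)  # a plain URLError: print the failure reason

if __name__ == '__main__':
    err()
```

Checking for code first matters here, because an HTTPError carries both attributes.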
The above is the detailed content of the Python web crawler tutorial. For more information, please follow other related articles on the PHP Chinese website!