
python web crawler tutorial

Aug 15, 2017, 01:43 PM

When browsing the Internet, we often come across pictures we would like to save and download, whether as desktop wallpaper or as design material. This article introduces how to implement the simplest possible web crawler in Python. Friends in need can refer to it; let's take a look together.

Preface

A web crawler (also known as a web spider or web robot, and in the FOAF community more often simply called a crawler) is a program or script that automatically fetches information from the World Wide Web according to certain rules. I have recently become very interested in Python crawlers and would like to share my learning path here. Suggestions are welcome, so we can exchange ideas and make progress together. Without further ado, here is the detailed introduction:

1. Development tools

The author's tool of choice is Sublime Text 3. Its conciseness fascinates me, and I recommend it to everyone. Of course, if your computer is well configured, PyCharm may suit you better.

For setting up a Python development environment in Sublime Text 3, this article is recommended:

[Setting up a Python development environment in Sublime][http://www.jb51.net/article/51838.htm]

2. Introduction to crawlers

As the name suggests, a crawler crawls across the Internet the way a bug crawls across a giant web. In this way, we can get what we want.


Since we want to crawl the Internet, we need to understand URLs. The formal name is "Uniform Resource Locator"; the nickname is "link". Its structure mainly consists of three parts:


(1) Protocol: such as the HTTP protocol we commonly see in URLs.


(2) Domain name or IP address: a domain name such as www.baidu.com, or the IP address to which the domain name resolves.


(3) Path: directory or file, etc.
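These three parts can be pulled apart programmatically with urllib.parse; a small sketch (the URL here is just an example):

```python
from urllib.parse import urlparse

# split an example URL into its three main parts
parts = urlparse("http://www.baidu.com/img/logo.png")
print(parts.scheme)  # protocol: http
print(parts.netloc)  # domain name: www.baidu.com
print(parts.path)    # path: /img/logo.png
```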


3. Developing the simplest crawler with urllib

(1) Introduction to urllib


The urllib package contains the following modules:

urllib.request: an extensible library for opening URLs.
urllib.error: the exception classes raised by urllib.request.
urllib.parse: parse URLs into components, or assemble them from components.
urllib.response: the response classes used by urllib.
urllib.robotparser: load a robots.txt file and answer questions about the fetchability of other URLs.

(2) Develop the simplest crawler
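Before writing the crawler itself, a quick aside on urllib.robotparser from the module list above: it parses a site's robots.txt and answers whether a given URL may be fetched. A minimal offline sketch; the rules below are invented for illustration, and example.com is a placeholder host:

```python
from urllib import robotparser

rp = robotparser.RobotFileParser()
# feed robots.txt rules directly instead of fetching them over the network
rp.parse([
    "User-agent: *",
    "Disallow: /private/",
])
print(rp.can_fetch("*", "http://example.com/index.html"))  # True
print(rp.can_fetch("*", "http://example.com/private/x"))   # False
```

A polite crawler checks can_fetch() before downloading each page.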

The Baidu homepage is simple and elegant, which is very suitable for our crawlers.

The crawler code is as follows:

from urllib import request

def visit_baidu():
    URL = "http://www.baidu.com"
    # open the URL
    req = request.urlopen(URL)
    # read the response body (bytes)
    html = req.read()
    # decode the bytes as utf-8
    html = html.decode("utf-8")
    print(html)

if __name__ == '__main__':
    visit_baidu()

Running it prints the HTML source of the Baidu homepage.

You can compare the output with what the browser sees by right-clicking a blank area of the Baidu homepage and choosing "Inspect element".

Of course, request can also construct a Request object, which is then opened with the urlopen method.

The code is as follows:

from urllib import request

def visit_baidu():
    # create a Request object
    req = request.Request('http://www.baidu.com')
    # open the Request object
    response = request.urlopen(req)
    # read and decode the response body
    html = response.read()
    html = html.decode('utf-8')
    print(html)

if __name__ == '__main__':
    visit_baidu()

The running result is the same as before.
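One practical advantage of building a Request object is that headers can be attached before opening it, for example a User-Agent, which some sites expect from clients. A hedged sketch: the User-Agent string below is made up for illustration, and no network access is needed just to build the object:

```python
from urllib import request

# build a Request with a custom User-Agent (value invented for illustration)
headers = {"User-Agent": "my-crawler/0.1"}
req = request.Request("http://www.baidu.com", headers=headers)

# urllib stores header names in capitalized form
print(req.get_header("User-agent"))  # my-crawler/0.1

# opening it then works exactly as before:
# response = request.urlopen(req)
```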


(3) Error handling

Errors are handled through the urllib.error module, mainly the URLError and HTTPError exceptions. Since HTTPError is a subclass of URLError, an HTTPError can also be caught as a URLError.

HTTPError can be captured through its code attribute.

The code for handling HTTPError is as follows:



from urllib import request
from urllib import error

def Err():
    url = "https://segmentfault.com/zzz"
    req = request.Request(url)

    try:
        response = request.urlopen(req)
        html = response.read().decode("utf-8")
        print(html)
    except error.HTTPError as e:
        print(e.code)

if __name__ == '__main__':
    Err()

Running it prints the error code:


404 is the printed error code. You can search online for detailed information about this status code.

URLError can be captured through its reason attribute.

The code for handling URLError is as follows:



from urllib import request
from urllib import error

def Err():
    url = "https://segmentf.com/"
    req = request.Request(url)

    try:
        response = request.urlopen(req)
        html = response.read().decode("utf-8")
        print(html)
    except error.URLError as e:
        print(e.reason)

if __name__ == '__main__':
    Err()

Running it prints the reason for the failure (for this unreachable domain, a name-resolution error).


To handle errors robustly, it is best to catch both exceptions in the code; the more thorough the handling, the clearer the behavior. Note that because HTTPError is a subclass of URLError, the HTTPError clause must come before the URLError clause; otherwise the URLError branch would match first and, for example, a 404 would be reported as Not Found rather than by its code.


The code is as follows:


from urllib import request
from urllib import error

# catch both HTTPError and URLError, most specific first
def Err():
    url = "https://segmentfault.com/zzz"
    req = request.Request(url)

    try:
        response = request.urlopen(req)
        html = response.read().decode("utf-8")
        print(html)
    except error.HTTPError as e:
        print(e.code)
    except error.URLError as e:
        print(e.reason)

if __name__ == '__main__':
    Err()

The above is the detailed content of python web crawler tutorial. For more information, please follow other related articles on the PHP Chinese website!
