This article introduces Python crawlers (web scrapers). Simply put, a crawler is a program that fetches data from the Internet. Let's take a look together; I hope it helps.
What is a crawler?
A crawler, simply put, is a program that automatically obtains data from the Internet.
How a crawler works
To obtain data from the network, we give the crawler a website address (called a URL in the program). The crawler sends an HTTP request to the server hosting the target page, the server returns the data to the client (that is, our crawler), and the crawler then carries out a series of operations such as parsing and saving the data.
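As a minimal sketch of this cycle (example.com is a placeholder URL, and naive string splitting stands in for a real HTML parser):

```python
import requests

url = "https://example.com"   # the address we hand to the crawler
resp = requests.get(url)      # the crawler sends an HTTP request to the server
html = resp.text              # the server returns data: the page's HTML

# "Parsing": naively pull out the <title> text.
title = html.split("<title>")[1].split("</title>")[0]

# "Saving": write the extracted data to a file.
with open("page_title.txt", "w", encoding="utf-8") as f:
    f.write(title)
```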
Process
Crawlers save us time. Suppose I want the Douban Top 250 movie list. Without a crawler, I would type the Douban Movies URL into a browser; the client (the browser) looks up the IP address of the Douban Movies server via DNS and establishes a connection with it; the browser builds an HTTP request and sends it to the Douban server; the server receives the request, pulls the Top 250 list from its database, wraps it in an HTTP response, and returns the response to the browser; the browser renders the response content and we see the data. Our crawler follows the same process, just expressed as code.
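Expressed as code, that whole round trip collapses to a few lines. This is only a sketch: the Top 250 URL is real, but Douban rejects clients without a browser-like User-Agent, so one is supplied in the headers:

```python
import requests

# Pretend to be a browser, since Douban blocks the default requests User-Agent.
headers = {"User-Agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64)"}
resp = requests.get("https://movie.douban.com/top250", headers=headers)

print(resp.status_code)  # 200 means the server returned the list page
print(resp.text[:200])   # the first 200 characters of the HTML the browser would render
```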
HTTP request
An HTTP request consists of a request line, request headers, a blank line, and a request body.
The request line consists of three parts:
1. The request method; common methods are GET, POST, PUT, DELETE, and HEAD
2. The path of the resource the client wants to obtain
3. The HTTP protocol version the client uses
The request headers are supplementary information the client sends to the server along with the request, such as the identity of the visitor, which will come up below.
The request body is the data the client submits to the server, such as the account name and password a user fills in when logging in. The headers and the body are separated by a blank line. Not every request has a body; a typical GET request, for example, carries none.
For example, when a browser logs into Douban, it sends an HTTP POST request to the server, with the username and password carried in the request body.
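To see these parts for yourself, requests can build a request without sending it. The login URL and form fields below are placeholders, not a real endpoint:

```python
import requests

# Build (but do not send) a POST request so its parts can be inspected.
req = requests.Request(
    "POST",
    "https://example.com/login",                      # placeholder URL
    data={"username": "alice", "password": "secret"}, # placeholder form fields
)
prepared = req.prepare()

print(prepared.method, prepared.url)  # the request line: method + resource path
print(prepared.headers)               # the request headers
print(prepared.body)                  # the request body: username=alice&password=secret
```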
HTTP response
The format of an HTTP response closely mirrors that of a request: it consists of a response line, response headers, a blank line, and a response body.
The response line also contains three parts: the server's HTTP version, the response status code, and the status description.
Each status code has a fixed meaning; some common ones are:

| Status code | Meaning |
| --- | --- |
| 200 | OK: the request succeeded |
| 301 | Moved Permanently: the resource has a new permanent URL |
| 400 | Bad Request: the server could not understand the request |
| 403 | Forbidden: the server refuses to fulfil the request |
| 404 | Not Found: the requested resource does not exist |
| 500 | Internal Server Error: the server encountered an error |
| 503 | Service Unavailable: the server is temporarily overloaded or down |
The requests library
Run pip install requests at the command line (cmd) to install requests. Then enter import requests in IDLE or an editor (I personally recommend VS Code or PyCharm) and run it; if no error is reported, the installation succeeded. Most libraries are installed the same way: pip install xxx (the library's name).
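A quick way to verify the installation (baidu.com is just a convenient test URL; any reachable site works):

```python
import requests

print(requests.__version__)  # prints the installed version if the import worked
resp = requests.get("https://www.baidu.com", timeout=5)
print(resp.status_code)      # 200 means the request went through
```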
Methods of requests
| Method | Description |
| --- | --- |
| requests.request() | Constructs a request; the base method that supports all the methods below |
| requests.get() | The main method for obtaining an HTML web page, corresponding to HTTP's GET |
| requests.head() | Obtains an HTML web page's header information, corresponding to HTTP's HEAD |
| requests.post() | Submits a POST request to an HTML web page, corresponding to HTTP's POST |
| requests.put() | Submits a PUT request to an HTML web page, corresponding to HTTP's PUT |
| requests.patch() | Submits a partial-modification request to an HTML web page, corresponding to HTTP's PATCH |
| requests.delete() | Submits a delete request to an HTML web page, corresponding to HTTP's DELETE |
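A quick demonstration of the first three methods; httpbin.org is a public echo service used here purely for illustration:

```python
import requests

url = "https://httpbin.org/anything"           # echoes back whatever it receives

r1 = requests.get(url)                         # GET: fetch the resource
r2 = requests.head(url)                        # HEAD: headers only, no body
r3 = requests.post(url, data={"kw": "hello"})  # POST: submit form data

print(r1.request.method, r2.request.method, r3.request.method)  # GET HEAD POST
print(len(r2.text))  # 0 -- a HEAD response carries no body
```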
Attributes of the Response object

| Attribute | Description |
| --- | --- |
| r.status_code | The return status of the HTTP request; 200 indicates a successful connection, 404 indicates failure |
| r.text | The response content as a string, i.e. the page content at the URL |
| r.encoding | The encoding guessed from the HTTP headers (assumed to be ISO-8859-1 if the headers contain no charset) |
| r.apparent_encoding | The encoding inferred from the response content itself (an alternative encoding) |
| r.content | The response content in binary form |
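For example (the exact encodings you see depend on the site; Baidu's homepage is a classic demonstration because its headers omit charset):

```python
import requests

r = requests.get("https://www.baidu.com")
print(r.status_code)        # 200 if the request succeeded
print(r.encoding)           # guessed from the headers, often ISO-8859-1 here
print(r.apparent_encoding)  # inferred from the content, typically utf-8
r.encoding = r.apparent_encoding  # common fix when r.text shows garbled Chinese
print(r.text[:100])         # first 100 characters, now decoded correctly
```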
Exceptions of the requests library

| Exception | Description |
| --- | --- |
| requests.ConnectionError | Network connection error, such as a DNS lookup failure or a refused connection |
| requests.HTTPError | HTTP error exception |
| requests.URLRequired | URL missing exception |
| requests.TooManyRedirects | The maximum number of redirects was exceeded, producing a redirect exception |
| requests.ConnectTimeout | Timed out while connecting to the remote server |
| requests.Timeout | The request to the URL timed out, producing a timeout exception |
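A common pattern that ties these attributes and exceptions together is a small "safe fetch" helper. This is a generic sketch, not code from the original project; httpbin.org/status/404 is a public test endpoint used to trigger an HTTP error:

```python
import requests

def get_html_text(url):
    try:
        r = requests.get(url, timeout=5)
        r.raise_for_status()              # turn 4xx/5xx status codes into requests.HTTPError
        r.encoding = r.apparent_encoding  # decode with the encoding inferred from the content
        return r.text
    except requests.RequestException as e:  # base class of the exceptions listed above
        return "request failed: " + str(e)

print(get_html_text("https://www.baidu.com")[:60])      # page HTML on success
print(get_html_text("https://httpbin.org/status/404"))  # prints the failure message
```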
Below is a small crawler project I made. The complete source code can be obtained by messaging me privately.
```python
import requests

def English_Chinese():
    url = "https://fanyi.baidu.com/sug"
    s = input("Enter the word to translate (Chinese/English): ")
    dat = {"kw": s}
    resp = requests.post(url, data=dat)  # send the POST request
    ch = resp.json()  # parse the JSON the server returns directly into a dict
    resp.close()
    dic_lenth = len(ch['data'])
    for i in range(dic_lenth):
        print("Word: " + ch['data'][i]['k'] + "  Meaning: " + ch['data'][i]['v'])

English_Chinese()  # run it
```

Detailed code explanation: we import the requests module and set url to the URL of the Baidu Translate interface.
The server responds with JSON shaped roughly like this; the value under "data" is a list of dictionaries:

```
{
    "data": [ {xx: xx}, {xx: xx}, {xx: xx}, {xx: xx} ]
}
```
The k and v fields inside each dictionary are the information we need. If the list under "data" holds n dictionaries, we can get the value of n through the len() function and traverse the list with a for loop:

```python
dic_lenth = len(ch['data'])
for i in range(dic_lenth):
    print("Word: " + ch['data'][i]['k'] + "  Meaning: " + ch['data'][i]['v'])
```

Okay, that's it for today's sharing, bye~ Hey? I forgot one thing. Let me give you one more piece of code, for crawling the weather!
```python
# -*- coding:utf-8 -*-
import requests
import bs4


def get_web(url):
    header = {"User-Agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64) "
                            "AppleWebKit/537.36 (KHTML, like Gecko) "
                            "Chrome/91.0.4472.114 Safari/537.36 Edg/91.0.864.59"}
    res = requests.get(url, headers=header, timeout=5)
    # print(res.encoding)
    content = res.text.encode('ISO-8859-1')  # re-encode so BeautifulSoup can detect the page's real encoding
    return content


def parse_content(content):
    soup = bs4.BeautifulSoup(content, 'lxml')

    # store the weather conditions
    list_weather = []
    weather_list = soup.find_all('p', class_='wea')
    for i in weather_list:
        list_weather.append(i.text)

    # store the dates
    list_day = []
    i = 0
    day_list = soup.find_all('h1')
    for each in day_list:
        if i < 7:  # only the first 7 <h1> tags are forecast days
            list_day.append(each.text.strip())
            i += 1
    # print(list_day)

    # store the temperatures
    list_tem = []
    i = 0
    tem_list = soup.find_all('p', class_='tem')
    for each in tem_list:
        if i == 0:  # today's entry has only one temperature
            list_tem.append(each.i.text)
            i += 1
        elif i > 0:  # later days have a [high, low] pair
            list_tem.append([each.span.text, each.i.text])
            i += 1
    # print(list_tem)

    # store the wind force
    list_wind = []
    wind_list = soup.find_all('p', class_='win')
    for each in wind_list:
        list_wind.append(each.i.text.strip())
    # print(list_wind)

    return list_day, list_weather, list_tem, list_wind


def get_content(url):
    content = get_web(url)
    day, weather, tem, wind = parse_content(content)
    item = 0
    for i in range(0, 7):
        if item == 0:
            print(day[i] + ':\t')
            print(weather[i] + '\t')
            print("Today's temperature: " + tem[i] + '\t')
            print("Wind: " + wind[i] + '\t')
            print('\n')
            item += 1
        elif item > 0:
            print(day[i] + ':\t')
            print(weather[i] + '\t')
            print("High: " + tem[i][0] + '\t')
            print("Low: " + tem[i][1] + '\t')
            print("Wind: " + wind[i] + '\t')
            print('\n')
```
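The post does not show the entry point for this crawler. Assuming the target is a 7-day forecast page on weather.com.cn (the class names wea, tem, and win match that site's markup), usage would look roughly like this; 101010100 is commonly cited as Beijing's city code, but verify the code for your own city:

```python
# Hypothetical entry point -- the URL pattern and city code are assumptions.
if __name__ == '__main__':
    url = 'http://www.weather.com.cn/weather/101010100.shtml'  # 7-day forecast page
    get_content(url)
```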
The above is the detailed content of Understand Python crawler in one article. For more information, please follow other related articles on the PHP Chinese website!