Basic concepts of crawler technology
Crawler: a program that automatically fetches data from the web.
Web page structure: HTML, CSS, JavaScript, etc.
HTTP request: The way the client requests data from the server.
HTTP response: Data returned by the server to the client.
Requests and Responses
Use Python's requests library to send HTTP requests.
import requests

url = "https://www.example.com"
response = requests.get(url)
Get response content
html_content = response.text
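Before parsing, it is worth confirming that the request actually succeeded. A minimal sketch (the status check and encoding handling here are common practice with requests, not steps from the original walkthrough):

# A status code of 200 means the request succeeded.
print(response.status_code)

# requests guesses the encoding from the headers; fall back to the
# encoding detected from the body if the text comes out garbled
# (common on Chinese-language sites).
response.encoding = response.apparent_encoding
html_content = response.text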
HTML parsing and data extraction
Use the BeautifulSoup library to parse HTML content.
from bs4 import BeautifulSoup

soup = BeautifulSoup(html_content, "html.parser")
Use CSS selectors or other methods to extract data.
title = soup.title.string
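BeautifulSoup's select() method accepts CSS selectors directly. A small sketch (the class names below are illustrative; real ones depend on the page being scraped):

links = soup.select("a")                # every <a> tag on the page
first_h1 = soup.select_one("h1")        # first <h1>, or None if absent
titles = soup.select("div.content h3")  # <h3> tags inside <div class="content">

for link in links:
    print(link.get("href"))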
Hands-on example: crawling article information from the Jianshu homepage
Send a request to obtain the HTML content of the Jianshu website's homepage.
import requests
from bs4 import BeautifulSoup

url = "https://www.jianshu.com"
response = requests.get(url)
html_content = response.text
Storing data
Store data in JSON format.
import json

with open("jianshu_articles.json", "w", encoding="utf-8") as f:
    json.dump(article_info_list, f, ensure_ascii=False, indent=4)
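To verify the file, the data can be loaded back with json.load() (a small complementary sketch, not part of the original walkthrough):

import json

with open("jianshu_articles.json", "r", encoding="utf-8") as f:
    loaded_articles = json.load(f)

print(len(loaded_articles), "articles loaded")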
Testing and Optimization
1. When you encounter an anti-crawler strategy, you can set a browser-like User-Agent header so the request appears to come from a regular browser.
headers = {"User-Agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/58.0.3029.110 Safari/537.3"} response = requests.get(url, headers=headers)
2. Use the time.sleep() function to control the request frequency; a randomized variant is sketched after this list.
import time

time.sleep(10)
3. Error handling and exception catching.
try:
    response = requests.get(url, headers=headers, timeout=5)
    response.raise_for_status()
except requests.exceptions.RequestException as e:
    print(f"Error: {e}")
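Picking up on point 2: a randomized delay is usually gentler on the server and less predictable than a fixed one. A minimal sketch (the 2-5 second range is an arbitrary choice, not from the original):

import random
import time

import requests

headers = {"User-Agent": "Mozilla/5.0"}  # shortened placeholder UA
url_list = ["https://www.example.com"]   # hypothetical pages to fetch

for page in url_list:
    response = requests.get(page, headers=headers, timeout=5)
    # Sleep a random interval so requests are spread out irregularly.
    time.sleep(random.uniform(2, 5))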
Complete code for the crawler:
import requests
from bs4 import BeautifulSoup
import json


def fetch_jianshu_articles():
    url = "https://www.jianshu.com"
    headers = {
        "User-Agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/58.0.3029.110 Safari/537.3"
    }

    # Fetch the homepage, bailing out on any network or HTTP error.
    try:
        response = requests.get(url, headers=headers, timeout=5)
        response.raise_for_status()
    except requests.exceptions.RequestException as e:
        print(f"Error: {e}")
        return

    # Parse the page and collect each article's title, author, and link.
    html_content = response.text
    soup = BeautifulSoup(html_content, "html.parser")
    articles = soup.find_all("div", class_="content")
    article_info_list = []
    for article in articles:
        title = article.h3.text.strip()
        author = article.find("span", class_="name").text.strip()
        link = url + article.h3.a["href"]
        article_info = {"title": title, "author": author, "link": link}
        article_info_list.append(article_info)
    return article_info_list


def save_to_json(article_info_list, filename):
    # ensure_ascii=False keeps Chinese characters readable in the file.
    with open(filename, "w", encoding="utf-8") as f:
        json.dump(article_info_list, f, ensure_ascii=False, indent=4)


if __name__ == "__main__":
    article_info_list = fetch_jianshu_articles()
    if article_info_list:
        save_to_json(article_info_list, "jianshu_articles.json")
        print("Jianshu articles saved to 'jianshu_articles.json'.")
    else:
        print("Failed to fetch Jianshu articles.")
Supplementary
To better understand this hands-on project, we need to go over some basic concepts and principles, which will help you master Python network programming and crawler technology. Here are some fundamental web crawling concepts:
HTTP Protocol: Hypertext Transfer Protocol (HTTP) is an application-layer protocol for transmitting hypermedia documents such as HTML. It is used to exchange data between web servers and web browsers or other clients.
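To make the request/response cycle concrete, here is a minimal sketch using Python's standard-library http.client, which is roughly what requests does under the hood:

import http.client

# Open a TLS connection and issue a raw GET request.
conn = http.client.HTTPSConnection("www.example.com")
conn.request("GET", "/")

# Read the response: status line, headers, then the body.
resp = conn.getresponse()
print(resp.status, resp.reason)        # e.g. 200 OK
print(resp.getheader("Content-Type"))  # e.g. text/html; charset=UTF-8
body = resp.read()
conn.close()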
HTML, CSS, and JavaScript: HTML is a markup language used to describe the structure and content of web pages. CSS is a stylesheet language used to control the presentation of HTML. JavaScript is a scripting language for web programming, mainly used to add dynamic behavior to web pages and interact with users.
DOM: The Document Object Model (DOM) is a cross-platform programming interface for processing HTML and XML documents. The DOM treats a document as a tree structure, where each node represents a part of the document (such as an element, attribute, or text node).
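BeautifulSoup exposes this tree directly as nested Python objects. A small sketch (the HTML string is a made-up example):

from bs4 import BeautifulSoup

html = "<html><body><div><p>Hello</p><p>World</p></div></body></html>"
soup = BeautifulSoup(html, "html.parser")

p = soup.p            # first <p> node in the tree
print(p.parent.name)  # "div" -- its parent element
print([child.text for child in soup.div.children])  # ['Hello', 'World']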
URL: A Uniform Resource Locator (URL) is a string of characters used to specify the location of an Internet resource.
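Python's standard library can split a URL into its components (the article path below is a hypothetical example):

from urllib.parse import urlparse

parts = urlparse("https://www.jianshu.com/p/abc123?utm_source=web#comments")
print(parts.scheme)    # https
print(parts.netloc)    # www.jianshu.com
print(parts.path)      # /p/abc123
print(parts.query)     # utm_source=web
print(parts.fragment)  # comments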
Request Headers: In HTTP requests, request headers contain information about the client's environment, browser, etc. Common request header fields include: User-Agent, Accept, Referer, etc.
Response Headers: In an HTTP response, the response headers carry metadata about the server and the content being returned (the status code itself travels on the status line, not in a header). Common response header fields include: Content-Type, Content-Length, Server, etc.
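With requests, both sides of the exchange are easy to inspect. A short sketch:

import requests

response = requests.get("https://www.example.com")

# Response headers behave like a case-insensitive dictionary.
print(response.headers.get("Content-Type"))
print(response.headers.get("Server"))

# The request headers that were actually sent are also available.
print(response.request.headers.get("User-Agent"))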
Anti-crawler strategies: Some websites adopt measures to stop crawlers from scraping their data, such as blocking IP addresses, limiting access speed, or loading data dynamically with JavaScript. In practical applications, we need to take corresponding countermeasures, such as using proxy IPs, throttling the crawl rate, or driving a real browser with automation libraries such as Selenium.
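As a rough illustration of routing traffic through a proxy with requests (the proxy address below is a placeholder, not a working endpoint):

import requests

# Placeholder proxy address -- substitute a proxy server you control.
proxies = {
    "http": "http://127.0.0.1:8080",
    "https": "http://127.0.0.1:8080",
}

# All traffic for this request is routed through the proxy.
response = requests.get("https://www.example.com", proxies=proxies, timeout=5)
print(response.status_code)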