Python Crawler Technology Introduction: Example Code Analysis
Crawler: a program that automatically fetches data from the web.
Web page structure: HTML, CSS, JavaScript, and so on.
HTTP request: the way a client asks a server for data.
HTTP response: the data the server returns to the client.
Use Python's requests library to send HTTP requests.
import requests

url = "https://www.example.com"
response = requests.get(url)
Get the response content:
html_content = response.text
Use the BeautifulSoup library to parse HTML content.
from bs4 import BeautifulSoup

soup = BeautifulSoup(html_content, "html.parser")
Use CSS selectors or other methods to extract data.
title = soup.title.string
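BeautifulSoup also supports CSS selectors through the select() method, which is often more convenient on structured pages. The selector and class name below are purely illustrative and depend on the actual markup of the page being parsed.

# Hypothetical selector: every link inside elements with class "title"
for link in soup.select("div.title a"):
    print(link.get_text(strip=True), link.get("href"))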
Practical example: send a request to obtain the HTML content of the Jianshu website's homepage.
import requests
from bs4 import BeautifulSoup

url = "https://www.jianshu.com"
response = requests.get(url)
html_content = response.text
Store data in JSON format.
import json

with open("jianshu_articles.json", "w", encoding="utf-8") as f:
    json.dump(article_info_list, f, ensure_ascii=False, indent=4)
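To verify the result, the saved file can be read back with json.load; this is just a quick sanity check and assumes the file written above exists in the working directory.

import json

with open("jianshu_articles.json", "r", encoding="utf-8") as f:
    loaded_articles = json.load(f)
print(f"Loaded {len(loaded_articles)} articles")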
headers = {"User-Agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/58.0.3029.110 Safari/537.3"} response = requests.get(url, headers=headers)
Limit the crawling speed by pausing between requests:

import time

time.sleep(10)
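In a real crawl, the pause usually goes inside the loop that fetches successive pages, so the requests are spread out over time. The page URLs below are placeholders for illustration.

import time
import requests

page_urls = [f"https://www.example.com/page/{i}" for i in range(1, 4)]  # placeholder URLs
for page_url in page_urls:
    response = requests.get(page_url)
    print(page_url, response.status_code)
    time.sleep(10)  # wait 10 seconds before the next request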
Handle request failures and timeouts with exception handling:

try:
    response = requests.get(url, headers=headers, timeout=5)
    response.raise_for_status()
except requests.exceptions.RequestException as e:
    print(f"Error: {e}")
Putting the pieces together, the complete crawler looks like this:

import requests
from bs4 import BeautifulSoup
import json
import time


def fetch_jianshu_articles():
    url = "https://www.jianshu.com"
    headers = {
        "User-Agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/58.0.3029.110 Safari/537.3"
    }
    # Request the homepage, with a timeout and basic error handling
    try:
        response = requests.get(url, headers=headers, timeout=5)
        response.raise_for_status()
    except requests.exceptions.RequestException as e:
        print(f"Error: {e}")
        return

    # Parse the HTML and extract the title, author, and link of each article
    html_content = response.text
    soup = BeautifulSoup(html_content, "html.parser")
    articles = soup.find_all("div", class_="content")
    article_info_list = []
    for article in articles:
        title = article.h3.text.strip()
        author = article.find("span", class_="name").text.strip()
        link = url + article.h3.a["href"]
        article_info = {"title": title, "author": author, "link": link}
        article_info_list.append(article_info)
    return article_info_list


def save_to_json(article_info_list, filename):
    # Write the collected articles to a JSON file
    with open(filename, "w", encoding="utf-8") as f:
        json.dump(article_info_list, f, ensure_ascii=False, indent=4)


if __name__ == "__main__":
    article_info_list = fetch_jianshu_articles()
    if article_info_list:
        save_to_json(article_info_list, "jianshu_articles.json")
        print("Jianshu articles saved to 'jianshu_articles.json'.")
    else:
        print("Failed to fetch Jianshu articles.")
To better understand this practical project, it helps to know some basic concepts and principles, which will also help you master Python network programming and crawler technology. Here are some fundamental web crawling concepts:
HTTP Protocol: The Hypertext Transfer Protocol (HTTP) is an application-layer protocol for transmitting hypermedia documents such as HTML. It is used to exchange data between web servers and web browsers or other clients.
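As a rough sketch of one request/response cycle with requests: the client sends a method and a URL, and the server answers with a status code, headers, and a body.

import requests

response = requests.get("https://www.example.com")
print(response.request.method, response.request.url)   # what the client sent
print(response.status_code)                             # e.g. 200 on success
print(len(response.content), "bytes in the response body")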
HTML, CSS, and JavaScript: HTML is the markup language used to describe the structure of web pages. CSS is a stylesheet language used to describe the presentation of HTML documents. JavaScript is a scripting language for web programming, mainly used to add dynamic behavior to web pages and interact with users.
DOM: The Document Object Model (DOM) is a cross-platform programming interface for processing HTML and XML documents. DOM treats a document as a tree structure, where each node represents a part (such as an element, attribute, or text).
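BeautifulSoup exposes the parsed document as exactly this kind of tree. The toy HTML below is hand-written just to show parent and child relationships.

from bs4 import BeautifulSoup

html = "<html><body><div id='main'><p>Hello</p><p>World</p></div></body></html>"
soup = BeautifulSoup(html, "html.parser")
div = soup.find("div", id="main")
for child in div.children:        # the two <p> element nodes
    print(child.name, child.string)
print(div.parent.name)            # the parent element: "body"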
URL: A Uniform Resource Locator (URL) is a string of characters used to specify the location of an Internet resource.
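Python's urllib.parse module can split a URL into its components and resolve relative links against a base URL; the addresses below are only examples.

from urllib.parse import urlparse, urljoin

parts = urlparse("https://www.example.com/articles/page.html?sort=new")
print(parts.scheme, parts.netloc, parts.path, parts.query)

# Resolve a relative link found in a page against its base URL
print(urljoin("https://www.example.com/articles/", "12345.html"))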
Request Headers: In an HTTP request, the request headers carry information about the client's environment, browser, and so on. Common request header fields include User-Agent, Accept, and Referer (a short example covering both request and response headers follows the next entry).
Response Headers: In an HTTP response, the response headers carry information about the server and the content being returned. Common response header fields include Content-Type, Content-Length, and Server.
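With requests, custom request headers are passed as a dictionary, and both the headers that were actually sent and the headers the server returned can be inspected. The header values below are illustrative, and which response fields appear depends on the server.

import requests

headers = {"User-Agent": "my-crawler/1.0", "Accept": "text/html"}  # illustrative values
response = requests.get("https://www.example.com", headers=headers)

print(response.request.headers["User-Agent"])    # request header that was sent
print(response.headers.get("Content-Type"))      # response headers returned by the server
print(response.headers.get("Content-Length"))
print(response.headers.get("Server"))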
Anti-crawler strategies: Some websites adopt measures to prevent crawlers from scraping their data, such as blocking IP addresses, limiting access speed, or loading data dynamically with JavaScript. In practice, we need to take corresponding countermeasures, such as using proxy IPs, limiting the crawling speed, or using browser automation libraries (such as Selenium).
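As a minimal sketch of one such countermeasure, the request below is routed through a proxy by passing a proxies dictionary to requests; the proxy address is a placeholder you would replace with a working proxy.

import requests

proxies = {
    "http": "http://127.0.0.1:8080",    # placeholder proxy address
    "https": "http://127.0.0.1:8080",
}
try:
    response = requests.get("https://www.example.com", proxies=proxies, timeout=5)
    print(response.status_code)
except requests.exceptions.RequestException as e:
    print(f"Error: {e}")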