
Practical crawler combat in Python: Baidu knows crawlers

王林 (Original)
2023-06-10 11:55:38

Python, as a powerful programming language, makes it easy to collect large amounts of data from the Internet, and crawler (web-scraping) technology is one of its most representative applications. A crawler can fetch and analyze all kinds of online data, yielding a wealth of valuable information. Baidu Zhidao (Baidu Knows) is a website hosting a large number of knowledge questions and answers. This article shows how to implement a Baidu Zhidao crawler in Python.

1. Start crawling

First, we need to understand how to crawl the Baidu Zhidao website. In Python, you can fetch a page's source with the requests library (or urllib's urlopen function). Once we have the source, we can parse the document with the BeautifulSoup library and easily filter out the information we need. Here, what we want is each question and its corresponding best answer. Inspecting Baidu Zhidao's page source shows that the question title and the best answer each carry their own CSS class, so we can select the content by class name.

The following is the implementation process of the code:

import requests
from bs4 import BeautifulSoup

# URL of a single Baidu Zhidao question page
url = "https://zhidao.baidu.com/question/2031956566959407839.html"

# Send the request
r = requests.get(url)
r.encoding = "utf-8"

# Parse the page
soup = BeautifulSoup(r.text, "html.parser")

# Extract the question title
question = soup.find("span", class_="ask-title").text
print("Question:", question)

# Extract the best answer
answer = soup.find("pre", class_="best-text mb-10").text
print("Best answer:", answer)
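Note that if either node is missing (the page layout changed, or the request was blocked), soup.find returns None and .text raises an AttributeError. As a minimal sketch, the parsing step can be wrapped in a None-safe helper; the class names are taken from the page structure shown above, and parse_question is a name invented here:

```python
from bs4 import BeautifulSoup

def parse_question(html):
    """Extract (question, best answer) from Baidu Zhidao page HTML.

    Returns an empty string for any part that is missing, instead of raising.
    """
    soup = BeautifulSoup(html, "html.parser")
    q = soup.find("span", class_="ask-title")
    a = soup.find("pre", class_="best-text mb-10")
    return (q.text.strip() if q else "", a.text.strip() if a else "")
```

The helper takes raw HTML, so it can be fed `r.text` from any question page, and it is easy to unit-test against saved sample pages without touching the network.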
2. Crawling multiple questions and answers

Next, we want to crawl several questions and their answers. We can create a list of question IDs, crawl each question and its best answer in a for loop, and print the results. Since each Baidu Zhidao question URL differs only in its suffix, we can generate the URLs with string formatting.

The following is the implementation code:

import requests
from bs4 import BeautifulSoup

# List of question IDs to crawl
questions = [
    "2031956566959407839",
    "785436012916117832",
    "1265757662946113922",
    "455270192556513192",
    "842556478655981450"
]

# Crawl each question and its best answer
for q in questions:
    # Build the URL from the question ID
    url = f"https://zhidao.baidu.com/question/{q}.html"

    # Send the request
    r = requests.get(url)

    # Parse the page
    soup = BeautifulSoup(r.text, "html.parser")

    # Extract the question title (fall back to "" if the node is missing)
    try:
        question = soup.find("span", class_="ask-title").text
    except AttributeError:
        question = ""

    # Extract the best answer (fall back to "" if the node is missing)
    try:
        answer = soup.find("pre", class_="best-text mb-10").text
    except AttributeError:
        answer = ""

    # Print the question and answer
    print("Question:", question)
    print("Best answer:", answer)
    print("----------------------")
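The loop above fires requests back to back with no handling of network failures, and real sites often throttle or temporarily refuse such traffic. As a hedged sketch, a small retry-with-pause wrapper can make the loop more robust; fetch_with_retry is a name invented here, and the fetch parameter is a stand-in for requests.get:

```python
import time

def fetch_with_retry(fetch, url, retries=3, delay=1.0):
    """Call fetch(url), retrying up to `retries` times with a pause between attempts."""
    for attempt in range(retries):
        try:
            return fetch(url)
        except Exception:
            if attempt == retries - 1:
                raise  # out of attempts: let the caller see the error
            time.sleep(delay)
```

In the loop, `r = fetch_with_retry(requests.get, url)` would replace the bare `requests.get(url)`; adding a short `time.sleep` between questions is also a polite default when crawling.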
3. Saving the crawled results to a file

Finally, we save the crawled results to a file. Python's built-in csv module can write each question and answer to a CSV file. To avoid garbled Chinese characters when the file is opened in Excel, we write it with the utf-8-sig encoding, which prepends a BOM (Byte Order Mark) to the file.

The following is the implementation code:

import requests
from bs4 import BeautifulSoup
import csv

# List of question IDs to crawl
questions = [
    "2031956566959407839",
    "785436012916117832",
    "1265757662946113922",
    "455270192556513192",
    "842556478655981450"
]

# Create the CSV file (utf-8-sig writes a BOM so Excel detects UTF-8)
with open("questions.csv", "w", newline='', encoding='utf-8-sig') as file:
    writer = csv.writer(file)
    writer.writerow(['Question', 'Best answer'])

    # Crawl each question and its best answer
    for q in questions:
        # Build the URL from the question ID
        url = f"https://zhidao.baidu.com/question/{q}.html"

        # Send the request
        r = requests.get(url)

        # Parse the page
        soup = BeautifulSoup(r.text, "html.parser")

        # Extract the question title (fall back to "" if the node is missing)
        try:
            question = soup.find("span", class_="ask-title").text
        except AttributeError:
            question = ""

        # Extract the best answer (fall back to "" if the node is missing)
        try:
            answer = soup.find("pre", class_="best-text mb-10").text
        except AttributeError:
            answer = ""

        # Write one row to the CSV file
        writer.writerow([question, answer])
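To confirm that utf-8-sig behaves as described (spreadsheet software uses the BOM to detect the encoding, and Python strips it again on read), here is a quick self-contained round-trip check using a temporary file; the file name and sample rows are invented for the demo:

```python
import csv
import os
import tempfile

# Sample Chinese rows, matching the kind of data the crawler saves
rows = [["问题", "最佳答案"], ["什么是Python?", "一种编程语言"]]

path = os.path.join(tempfile.gettempdir(), "questions_demo.csv")

# Write with a BOM so spreadsheet software detects UTF-8
with open(path, "w", newline="", encoding="utf-8-sig") as f:
    csv.writer(f).writerows(rows)

# Reading back with utf-8-sig strips the BOM from the first header cell
with open(path, newline="", encoding="utf-8-sig") as f:
    back = list(csv.reader(f))

print(back[0])  # → ['问题', '最佳答案']
```

If the file were read back with plain utf-8 instead, the BOM would survive as an invisible `\ufeff` prefix on the first header cell, which is a common source of hard-to-spot bugs.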
4. Summary

In this article, we showed how to crawl the Baidu Zhidao website with Python: sending requests with the requests (or urllib) library, parsing pages with the BeautifulSoup library, and saving the crawled results to a CSV file. With these techniques we can easily collect data from the Internet and analyze it. Crawler technology plays an important role in big-data analysis in the Internet era, and it is well worth a Python programmer's time to learn.

