
Scrapy implements crawling and analysis of WeChat public account articles

WeChat has become one of the most popular social media applications in recent years, and the public accounts that operate on it play an important role. Public accounts are a rich source of information and knowledge, since each account can publish articles, image-and-text messages and other content. This content is useful in many fields, such as media reporting and academic research.

This article introduces how to use the Scrapy framework to crawl and analyze WeChat public account articles. Scrapy is a Python web crawling framework designed for extracting structured data from websites; it is highly customizable and efficient.

  1. Install Scrapy and create a project

To crawl with the Scrapy framework, first install Scrapy and the other dependencies used in this article: pymongo for MongoDB storage, mysql-connector-python in case you prefer MySQL, and pandas plus matplotlib for the analysis step. They can all be installed with pip:

pip install scrapy
pip install pymongo
pip install mysql-connector-python
pip install pandas matplotlib

After installing Scrapy, we need to use the Scrapy command line tool to create the project. The command is as follows:

scrapy startproject wechat

After executing this command, Scrapy creates a project named "wechat" and generates a number of files and directories inside it.
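
The generated layout typically looks like this (file names as created by scrapy startproject):

wechat/
    scrapy.cfg            # deploy configuration file
    wechat/               # the project's Python package
        __init__.py
        items.py          # item definitions
        middlewares.py    # spider and downloader middlewares
        pipelines.py      # item pipelines (MongoPipeline will go here)
        settings.py       # project settings
        spiders/          # directory for our spiders
            __init__.py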

  2. Implement crawling of WeChat public account articles

Before writing the crawler, we need to understand the URL format of a WeChat public account article page. A typical article URL looks like this:

https://mp.weixin.qq.com/s?__biz=XXX&mid=XXX&idx=1&sn=XXX&chksm=XXX#wechat_redirect

Here, __biz is the ID of the WeChat public account, mid is the ID of the article, idx is the serial number of the article within its push, sn is the article signature, and chksm is a content checksum. To crawl all articles of a given public account, we therefore need its __biz value (referred to below as biz_id), which uniquely identifies the account, and use it to build the URLs.

First, we prepare a list of public account IDs whose articles we want to crawl. The IDs can be collected in various ways; here we use a few test IDs as an example:

biz_ids = ['MzU5MjcwMzA4MA==', 'MzI4MzMwNDgwMQ==', 'MzAxMTcyMzg2MA==']

Next, we write a Spider that crawls all articles of a given public account. The account ID is passed to the spider as an argument so that the same spider can handle different accounts.

import scrapy

class WeChatSpider(scrapy.Spider):
    name = "wechat"
    allowed_domains = ["mp.weixin.qq.com"]
    
    def __init__(self, biz_id=None, *args, **kwargs):
        super().__init__(*args, **kwargs)
        # Keep the account ID so it can be attached to every crawled item;
        # biz_id already ends with '==', so it is inserted into the URL as-is
        self.biz_id = biz_id
        self.start_urls = ['https://mp.weixin.qq.com/mp/profile_ext?action=home&__biz={}#wechat_redirect'.format(biz_id)]

    def parse(self, response):
        # Extract the URLs of the articles listed on the current profile page
        article_urls = response.xpath('//h4[1]/a/@href')
        for url in article_urls.extract():
            yield scrapy.Request(url, callback=self.parse_article)

        # Follow the "next page" link, if present, and parse it recursively
        next_page = response.xpath('//a[@id="js_next"]/@href')
        if next_page:
            yield scrapy.Request(response.urljoin(next_page[0].extract()), callback=self.parse)

    def parse_article(self, response):
        # Extract the URL and title of a single article; biz_id is kept so
        # that articles can later be grouped by account
        url = response.url
        title = response.xpath('//h2[@class="rich_media_title"]/text()')
        yield {'url': url, 'title': title.extract_first().strip(), 'biz_id': self.biz_id}

The spider uses the given account ID to open the account's profile page and then follows the "next page" links recursively, extracting the URL of every article along the way. The parse_article method then extracts the URL and title of each article for later processing. The spider itself is not complex, although crawling page by page like this is relatively slow.

Finally, run the following command in a terminal to start the spider:

scrapy crawl wechat -a biz_id=XXXXXXXX
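
If you would rather launch a crawl for every ID in the biz_ids list from one script instead of running this command repeatedly, Scrapy's CrawlerProcess can do it. This is a minimal sketch, assuming it is run from inside the project directory so that get_project_settings() can find the project settings:

from scrapy.crawler import CrawlerProcess
from scrapy.utils.project import get_project_settings

biz_ids = ['MzU5MjcwMzA4MA==', 'MzI4MzMwNDgwMQ==', 'MzAxMTcyMzg2MA==']

process = CrawlerProcess(get_project_settings())
for biz_id in biz_ids:
    # Schedule one run of the "wechat" spider per account ID
    process.crawl('wechat', biz_id=biz_id)
process.start()  # blocks until all scheduled crawls have finished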

Similarly, we can crawl several public accounts at once by passing all of their IDs in a single command, separated by commas:

scrapy crawl wechat -a biz_id=ID1,ID2,ID3
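
Note that the __init__ shown above builds only one start URL from a single biz_id, so supporting the comma-separated form requires a small change. One possible sketch:

    def __init__(self, biz_id=None, *args, **kwargs):
        super().__init__(*args, **kwargs)
        # Accept a single ID or a comma-separated list of IDs
        ids = biz_id.split(',') if biz_id else []
        self.start_urls = [
            'https://mp.weixin.qq.com/mp/profile_ext?action=home&__biz={}#wechat_redirect'.format(i)
            for i in ids
        ]

With several accounts in one run, the per-article biz_id can no longer be a single spider attribute; parse_article would instead have to recover it from each article's own __biz URL parameter.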

  3. Storing article data

After crawling the article, we need to save the title and URL of the article to a database (such as MongoDB, MySQL, etc.). Here, we will use the pymongo library to save the crawled data.

import pymongo

class MongoPipeline(object):
    collection_name = 'wechat'

    def __init__(self, mongo_uri, mongo_db):
        self.mongo_uri = mongo_uri
        self.mongo_db = mongo_db

    @classmethod
    def from_crawler(cls, crawler):
        # Read the connection settings from settings.py
        return cls(
            mongo_uri=crawler.settings.get('MONGO_URI'),
            mongo_db=crawler.settings.get('MONGO_DATABASE', 'items')
        )

    def open_spider(self, spider):
        # Open the MongoDB connection when the spider starts
        self.client = pymongo.MongoClient(self.mongo_uri)
        self.db = self.client[self.mongo_db]

    def close_spider(self, spider):
        # Close the connection when the spider finishes
        self.client.close()

    def process_item(self, item, spider):
        # Insert each crawled item into the collection
        self.db[self.collection_name].insert_one(dict(item))
        return item

In this Pipeline, we use MongoDB as the backend to store data. This class can be modified as needed to use other database systems.
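
For instance, since mysql-connector-python was installed earlier, a MySQL-backed pipeline could look roughly like this. This is only a sketch: the connection parameters and the articles table (with url, title and biz_id columns) are assumptions to adapt to your own environment:

import mysql.connector

class MySQLPipeline(object):
    def open_spider(self, spider):
        # Placeholder connection details; adjust to your MySQL setup
        self.conn = mysql.connector.connect(
            host='localhost', user='root', password='password', database='wechat')
        self.cursor = self.conn.cursor()

    def close_spider(self, spider):
        self.conn.close()

    def process_item(self, item, spider):
        # Assumes a pre-created table:
        # CREATE TABLE articles (url VARCHAR(512), title VARCHAR(255), biz_id VARCHAR(64))
        self.cursor.execute(
            'INSERT INTO articles (url, title, biz_id) VALUES (%s, %s, %s)',
            (item.get('url'), item.get('title'), item.get('biz_id')))
        self.conn.commit()
        return item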

Next, we need to configure database-related parameters in the settings.py file:

MONGO_URI = 'mongodb://localhost:27017/'
MONGO_DATABASE = 'wechat'
ITEM_PIPELINES = {'wechat.pipelines.MongoPipeline': 300}

Finally, with the pipeline enabled in ITEM_PIPELINES, every item yielded by the spider is passed through MongoPipeline automatically and stored in MongoDB. The complete spider looks like this:

class WeChatSpider(scrapy.Spider):
    name = "wechat"
    allowed_domains = ["mp.weixin.qq.com"]
    
    def __init__(self, biz_id=None, *args, **kwargs):
        super().__init__(*args, **kwargs)
        self.biz_id = biz_id
        self.start_urls = ['https://mp.weixin.qq.com/mp/profile_ext?action=home&__biz={}#wechat_redirect'.format(biz_id)]

    def parse(self, response):
        article_urls = response.xpath('//h4[1]/a/@href')
        for url in article_urls.extract():
            yield scrapy.Request(url, callback=self.parse_article)
        
        next_page = response.xpath('//a[@id="js_next"]/@href')
        if next_page:
            yield scrapy.Request(response.urljoin(next_page[0].extract()), callback=self.parse)
            
    def parse_article(self, response):
        url = response.url
        title = response.xpath('//h2[@class="rich_media_title"]/text()')
        # The yielded item is handed to every pipeline enabled in ITEM_PIPELINES,
        # so MongoPipeline stores it without any extra code in the spider
        yield {'url': url, 'title': title.extract_first().strip(), 'biz_id': self.biz_id}
Scrapy routes every item yielded from a spider callback through the pipelines enabled in ITEM_PIPELINES, so MongoPipeline receives and stores each article item automatically; no request meta flags or other extra wiring are needed in the spider.
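
If you prefer typed items over plain dicts, a scrapy.Item can be defined in items.py; a minimal sketch with fields matching what the spider yields:

import scrapy

class ArticleItem(scrapy.Item):
    url = scrapy.Field()
    title = scrapy.Field()
    biz_id = scrapy.Field()

The spider would then yield ArticleItem(url=url, title=title.extract_first().strip(), biz_id=self.biz_id) instead of a dict, and MongoPipeline's dict(item) call would keep working unchanged.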

  4. Data analysis

Finally, we use pymongo and pandas to analyze and visualize the crawled data.

Here we export the crawled data from MongoDB to a CSV file, and then use pandas to process and visualize it.

The following is the implementation process:

import pandas as pd
import matplotlib.pyplot as plt
from pymongo import MongoClient

# Read all crawled articles back from MongoDB
client = MongoClient('mongodb://localhost:27017/')
db = client['wechat']
articles = db['wechat']

docs = list(articles.find())

# Save the data to a CSV file
df = pd.DataFrame(docs)
df.to_csv('wechat.csv', encoding='utf-8', index=False)

# Plot the number of articles per public account
df.groupby('biz_id')['title'].count().plot(kind='bar')
plt.show()

In the code above, we use pymongo and pandas to export the crawled data from MongoDB to a CSV file, and then use pandas' analysis functions to plot the number of articles published by each public account.
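
The same DataFrame supports other quick analyses as well. As one illustration, this sketch counts the most frequent words in the article titles; whitespace splitting is used only to keep the example dependency-free, and a Chinese tokenizer such as jieba would give better results on Chinese titles:

from collections import Counter

# Count the most common tokens across all crawled titles
tokens = Counter()
for title in df['title'].dropna():
    tokens.update(title.split())

print(tokens.most_common(10))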

This completes the walkthrough of crawling and analyzing WeChat public account articles with Scrapy.
