
Crawl WeChat public account articles and save them as PDF files (Python method)

coldplay.xixi
2020-08-29 17:14:48


Related learning recommendations: WeChat public account development tutorial

Preface

This is my first blog post. It covers how to crawl articles from a WeChat public account and save them locally as PDF files.

Crawling WeChat public account articles (using wechatsogou)

1. Installation

pip install wechatsogou --upgrade

wechatsogou is a WeChat public account crawler interface based on Sogou WeChat search.

2. Usage method

Usage is as follows:

import wechatsogou
# captcha_break_time is the number of retries when the captcha is entered incorrectly; the default is 1
ws_api = wechatsogou.WechatSogouAPI(captcha_break_time=3)
# name of the public account
gzh_name = ''
# return info on the account's 10 most recent articles as a dict
data = ws_api.get_gzh_article_by_history(gzh_name)

The data dictionary has the following structure:

{
    'gzh': {
        'wechat_name': '',  # name
        'wechat_id': '',  # WeChat id
        'introduction': '',  # introduction
        'authentication': '',  # authentication
        'headimage': ''  # avatar
    },
    'article': [
        {
            'send_id': int,  # mass-send id; note it is not unique, because messages sent in the same batch share one mass-send id
            'datetime': int,  # mass-send datetime, 10-digit timestamp
            'type': '',  # message type, always 49 (the in-app history page has other types, but the web page of the last 10 messages only has 49), meaning image-and-text
            'main': int,  # whether this is the first message of a batch, 1 or 0
            'title': '',  # article title
            'abstract': '',  # abstract
            'fileid': int,  #
            'content_url': '',  # article link
            'source_url': '',  # "Read the original" link
            'cover': '',  # cover image
            'author': '',  # author
            'copyright_stat': int,  # article type, e.g. original
        },
        ...
    ]
}

Two pieces of information need to be obtained here: the article title and the article URL.

Once you have the article URL, you can convert the corresponding HTML page into a PDF file.
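For illustration, here is a minimal sketch of pulling those two fields out of the returned dictionary (it assumes ws_api and gzh_name are set up as in the usage example above):

# Sketch: extract the title and url of each article returned by get_gzh_article_by_history
data = ws_api.get_gzh_article_by_history(gzh_name)
for article in data['article']:
    title = article['title']        # article title
    url = article['content_url']    # article link, used below to generate the PDF
    print(title, url)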

Generate PDF files

1. Install wkhtmltopdf

Download address: https://wkhtmltopdf.org/downloads.html

2. Install pdfkit

pip install pdfkit

3. How to use

import pdfkit
# generate a pdf from a url
pdfkit.from_url('http://baidu.com','out.pdf')
# generate a pdf from an html file
pdfkit.from_file('test.html','out.pdf')
# generate a pdf from an html string
pdfkit.from_string('Hello!','out.pdf')
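If the wkhtmltopdf executable is not on your PATH (often the case on Windows), pdfkit can also be pointed at the binary explicitly via a configuration object. The path below is only an example placeholder; replace it with your actual install location:

import pdfkit

# Tell pdfkit where wkhtmltopdf lives; the path here is just an example
config = pdfkit.configuration(wkhtmltopdf=r'C:\Program Files\wkhtmltopdf\bin\wkhtmltopdf.exe')
pdfkit.from_url('http://baidu.com', 'out.pdf', configuration=config)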

If you generate the PDF directly from the article URL obtained above, the images in the article will not be displayed in the PDF file.

Solution:

# This method processes the html for the given article url so that images display
content_info = ws_api.get_article_content(url)
# get the html code (it is not a complete document; head, body and other tags still need to be added)
html_code = content_info['content_html']

Then build a complete HTML document around html_code and call pdfkit.from_string() to generate the PDF; the images in the article will now be displayed in the PDF file, for example:
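A minimal sketch of that wrapping step (title and html_code come from the snippets above; the exact tags are illustrative, and the complete script below does the same thing):

# Wrap the article fragment in a full HTML document so wkhtmltopdf can render it, images included
full_html = f'''<!DOCTYPE html>
<html>
<head><meta charset="UTF-8"><title>{title}</title></head>
<body>{html_code}</body>
</html>'''
pdfkit.from_string(full_html, 'out.pdf')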

Complete code

import os
import pdfkit
import datetime
import wechatsogou

# Initialize the API; captcha_break_time is the number of retries for captcha errors
ws_api = wechatsogou.WechatSogouAPI(captcha_break_time=3)


def url2pdf(url, title, targetPath):
    '''
    Generate a pdf file with pdfkit
    :param url: article url
    :param title: article title
    :param targetPath: directory in which to store the pdf file
    '''
    try:
        content_info = ws_api.get_article_content(url)
    except:
        return False
    # processed html, wrapped in a complete document
    html = f'''
    <!DOCTYPE html>
    <html>
    <head>
        <meta charset="UTF-8">
        <title>{title}</title>
    </head>
    <body>
    <h2>{title}</h2>
    {content_info['content_html']}
    </body>
    </html>
    '''
    try:
        pdfkit.from_string(html, targetPath + os.path.sep + f'{title}.pdf')
    except:
        # Some article titles contain special characters that cannot be used in file names,
        # so fall back to a timestamp as the file name
        filename = datetime.datetime.now().strftime('%Y%m%d%H%M%S') + '.pdf'
        pdfkit.from_string(html, targetPath + os.path.sep + filename)


if __name__ == '__main__':
    # name of the public account to crawl
    gzh_name = ''
    targetPath = os.getcwd() + os.path.sep + gzh_name
    # create the target folder if it does not exist
    if not os.path.exists(targetPath):
        os.makedirs(targetPath)
    # return info on the account's 10 most recent articles as a dict
    data = ws_api.get_gzh_article_by_history(gzh_name)
    article_list = data['article']
    for article in article_list:
        url = article['content_url']
        title = article['title']
        url2pdf(url, title, targetPath)

Related learning recommendations: python tutorial

The above is the detailed content of Crawl WeChat public account articles and save them as PDF files (Python method). For more information, please follow other related articles on the PHP Chinese website!

Statement:
This article is reproduced from csdn.net. If there is any infringement, please contact admin@php.cn to have it removed.