Writing a Python Crawler from Scratch: Scraping Baidu Tieba and Saving It to a Local txt File (Improved Version)

The Baidu Tieba crawler works on essentially the same principle as the Qiushibaike (糗百) crawler: inspect the page source, pick out the key data with regular expressions, and save it to a local txt file.

Project:

A web crawler for Baidu Tieba, written in Python.

Usage:

Create a new file named BugBaidu.py, copy the code below into it, and double-click the file to run it.

What the program does:

It collects everything posted by the thread starter (楼主) in a thread and saves it to a local txt file.

How it works:

First, open a thread, click "只看楼主" (view thread starter only) and move to another page; the URL changes slightly, becoming:
http://tieba.baidu.com/p/2296712428?see_lz=1&pn=1
As you can see, see_lz=1 means "thread starter only" and pn is the page number. Keep this in mind, because the whole crawler is built around it.
This is the URL pattern we are going to use.
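As a minimal sketch, the per-page URLs can be assembled from this base address (the thread id 2296712428 is just the example above):

base_url = 'http://tieba.baidu.com/p/2296712428' + '?see_lz=1'   # thread-starter-only view
for pn in range(1, 4):                                           # pages 1, 2, 3
    print(base_url + '&pn=' + str(pn))
# prints http://tieba.baidu.com/p/2296712428?see_lz=1&pn=1 and so on

This is exactly what the full program further down does in its get_data method.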
The next step is to look at the page source.
First we pull out the title, which will be needed later when naming the saved file.
You can see that Baidu serves the page in gbk encoding and that the title is wrapped in an h1 tag:

The relevant markup looks like this:

<h1 class="core_title_txt" ...>【原创】时尚首席(关于时尚,名利,事业,爱情,励志)</h1>

Similarly, the body of each post is marked up with a div plus a class attribute, so all that is left is to match it with a regular expression; a short sketch of that step follows.
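A minimal sketch of the matching step (the sample string below is a made-up fragment of page source, but the two patterns are the same ones used in the full program further down):

# -*- coding: utf-8 -*-
import re

# made-up fragment of a Tieba page; the real page is gbk-encoded and much larger
sample = ('<h1 class="core_title_txt">Example thread title</h1>'
          '<div id="post_content_123" class="d_post_content">First post text</div>')

title = re.search(r'<h1.*?>(.*?)</h1>', sample, re.S)
print(title.group(1))                                        # Example thread title

posts = re.findall('id="post_content.*?>(.*?)</div>', sample, re.S)
print(posts)                                                 # ['First post text']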
Run screenshot and generated txt file: (screenshots omitted here)

The complete code is as follows:

# -*- coding: utf-8 -*-
#---------------------------------------
#   Program: Baidu Tieba spider
#   Version: 0.5
#   Author: why
#   Date: 2013-05-16
#   Language: Python 2.7
#   Operation: enter the thread URL; the spider switches to "thread starter only" view and saves the posts locally
#   Function: package the thread starter's posts into a local txt file
#---------------------------------------

import string
import urllib2
import re

#----------- handle the various tags found on the page -----------
class HTML_Tool:
    # non-greedy match for \t, \n, spaces, hyperlinks and images
    BgnCharToNoneRex = re.compile("(\t|\n| |<a.*?>|<img.*?>)")

    # non-greedy match for any tag
    EndCharToNoneRex = re.compile("<.*?>")

    # non-greedy match for any <p> tag
    BgnPartRex = re.compile("<p.*?>")
    CharToNewLineRex = re.compile("(<br/>|</p>|<tr>|<div>|</div>)")
    CharToNextTabRex = re.compile("<td>")

    # convert a few HTML character entities back to the original characters
    replaceTab = [("&lt;","<"),("&gt;",">"),("&amp;","&"),("&quot;","\""),("&nbsp;"," ")]

    def Replace_Char(self,x):
        x = self.BgnCharToNoneRex.sub("",x)
        x = self.BgnPartRex.sub("\n    ",x)
        x = self.CharToNewLineRex.sub("\n",x)
        x = self.CharToNextTabRex.sub("\t",x)
        x = self.EndCharToNoneRex.sub("",x)

        for t in self.replaceTab:
            x = x.replace(t[0],t[1])
        return x

class Baidu_Spider:
    # declare the relevant attributes
    def __init__(self,url):
        self.myUrl = url + '?see_lz=1'
        self.datas = []
        self.myTool = HTML_Tool()
        print u'Baidu Tieba spider started, click click'

    # load the first page and decode it
    def baidu_tieba(self):
        # read the raw page and decode it from gbk
        myPage = urllib2.urlopen(self.myUrl).read().decode("gbk")
        # work out how many pages of thread-starter content there are
        endPage = self.page_counter(myPage)
        # get the thread title
        title = self.find_title(myPage)
        print u'Thread title: ' + title
        # fetch and save the actual data
        self.save_data(self.myUrl,title,endPage)

    # work out how many pages there are in total
    def page_counter(self,myPage):
        # match "共有12页" ("12 pages in total") to get the page count
        myMatch = re.search(r'class="red">(\d+?)</span>', myPage, re.S)
        if myMatch:
            endPage = int(myMatch.group(1))
            print u'Spider report: the thread starter has %d pages of original content' % endPage
        else:
            endPage = 0
            print u'Spider report: could not work out how many pages the thread starter posted!'
        return endPage

    # find the thread title
    def find_title(self,myPage):
        # match <h1 class="core_title_txt" ...>xxxxxxxxxx</h1> to extract the title
        myMatch = re.search(r'<h1.*?>(.*?)</h1>', myPage, re.S)
        title = u'Untitled'
        if myMatch:
            title = myMatch.group(1)
        else:
            print u'Spider report: could not load the thread title!'
        # a file name may not contain any of the following characters: \ / : * ? " < > |
        title = title.replace('\\','').replace('/','').replace(':','').replace('*','').replace('?','').replace('"','').replace('>','').replace('<','').replace('|','')
        return title

    # save the thread starter's posts
    def save_data(self,url,title,endPage):
        # load the page data into the list
        self.get_data(url,endPage)
        # open the local file
        f = open(title+'.txt','w+')
        f.writelines(self.datas)
        f.close()
        print u'Spider report: content has been downloaded and saved as a txt file'
        print u'Press any key to exit...'
        raw_input()

    # fetch each page's source and store the processed text in the list
    def get_data(self,url,endPage):
        url = url + '&pn='
        for i in range(1,endPage+1):
            print u'Spider report: loading page %d...' % i
            myPage = urllib2.urlopen(url + str(i)).read()
            # clean up the html in myPage and store the result in datas
            self.deal_data(myPage.decode('gbk'))

    # pull the post bodies out of the page source
    def deal_data(self,myPage):
        myItems = re.findall('id="post_content.*?>(.*?)</div>',myPage,re.S)
        for item in myItems:
            data = self.myTool.Replace_Char(item.replace("\n","").encode('gbk'))
            self.datas.append(data+'\n')

#-------- program entry point ------------------
print u"""#---------------------------------------
#   Program: Baidu Tieba spider
#   Version: 0.5
#   Author: why
#   Date: 2013-05-16
#   Language: Python 2.7
#   Operation: enter the thread URL; the spider switches to "thread starter only" view and saves the posts locally
#   Function: package the thread starter's posts into a local txt file
#---------------------------------------
"""
# using a novel-related Tieba thread as the example
# bdurl = 'http://tieba.baidu.com/p/2296712428?see_lz=1&pn=1'

print u'Please enter the digits at the end of the thread URL:'
bdurl = 'http://tieba.baidu.com/p/' + str(raw_input(u'http://tieba.baidu.com/p/'))

# run the spider
mySpider = Baidu_Spider(bdurl)
mySpider.baidu_tieba()
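With the file saved as BugBaidu.py, a session against the example thread looks roughly like this (the page count shown is a placeholder; the actual values depend on the thread):

python BugBaidu.py
(the banner header is printed first)
Please enter the digits at the end of the thread URL:
http://tieba.baidu.com/p/2296712428
Baidu Tieba spider started, click click
Spider report: the thread starter has 5 pages of original content
Thread title: ...
Spider report: loading page 1...
...
Spider report: content has been downloaded and saved as a txt file
Press any key to exit...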

That is the full improved code for scraping Baidu Tieba. It is simple and practical; I hope it helps.
