
An illustrated example of using Python Selenium to crawl content and store it in a MySQL database

高洛峰 · Original · 2017-03-17

This article walks through the code for crawling content with Python Selenium and storing it in a MySQL database.

A previous article described how to crawl CSDN blog summaries and related information. A Selenium crawler usually writes its results to a TXT file, which makes later data processing and analysis awkward. This article crawls my personal blog information with Selenium and stores it in MySQL so the data can then be analyzed: for example, which time periods have the most posts, article topics via WordCloud, read-count rankings, and so on.
This is an introductory article; I hope it is helpful, and please bear with any errors or omissions. The next article will briefly cover the data-analysis step.

1. Crawling results
The crawled address is: http://blog.csdn.net/Eastmount

[screenshot]


The results stored in the MySQL database are shown below:


[screenshot]

The crawler in operation is shown below:




[screenshot]

2. Complete code analysis

The complete code is as follows:

# coding=utf-8

from selenium import webdriver
from selenium.webdriver.common.keys import Keys
import selenium.webdriver.support.ui as ui
import re
import time
import os
import codecs
import MySQLdb

# Open Firefox and set an explicit wait for page loads
driver = webdriver.Firefox()
wait = ui.WebDriverWait(driver, 10)

# Get the total page count shown at the bottom of a blogger's article list
def getPage():
    print 'getPage'
    number = 0
    texts = driver.find_element_by_xpath("//p[@id='papelist']").text
    print 'Pager text:', texts
    m = re.findall(r'(\w*[0-9]+)\w*', texts)  # extract the numbers with a regular expression
    print 'Total pages: ' + str(m[1])
    return int(m[1])

# Main function
def main():
    # Number of lines (bloggers) in the URL list file
    count = len(open("Blog_URL.txt", 'rU').readlines())
    print count
    n = 0
    urlfile = open("Blog_URL.txt", 'r')

    # Loop over each blogger and fetch the article summary information
    while n < count:  # normally loops over all count bloggers listed in the file
        url = urlfile.readline()
        url = url.strip("\n")
        print url
        driver.get(url)
        # Total page count
        allPage = getPage()
        print u'Total pages:', allPage
        time.sleep(2)

        # Database operations
        try:
            conn = MySQLdb.connect(host='localhost', user='root',
                                   passwd='123456', port=3306, db='test01')
            cur = conn.cursor()  # database cursor

            # Avoids: UnicodeEncodeError: 'latin-1' codec can't encode character
            conn.set_character_set('utf8')
            cur.execute('SET NAMES utf8;')
            cur.execute('SET CHARACTER SET utf8;')
            cur.execute('SET character_set_connection=utf8;')

            # Crawl every list page of this blogger
            m = 1  # page 1
            while m <= allPage:
                ur = url + "/article/list/" + str(m)
                print ur
                driver.get(ur)

                # Titles
                article_title = driver.find_elements_by_xpath("//p[@class='article_title']")
                for title in article_title:
                    con = title.text
                    con = con.strip("\n")
                    #print con + '\n'

                # Abstracts
                article_description = driver.find_elements_by_xpath("//p[@class='article_description']")
                for description in article_description:
                    con = description.text
                    con = con.strip("\n")
                    #print con + '\n'

                # Management info (publish time, read count, comment count)
                article_manage = driver.find_elements_by_xpath("//p[@class='article_manage']")
                for manage in article_manage:
                    con = manage.text
                    con = con.strip("\n")
                    #print con + '\n'

                num = 0
                print u'Articles on this page:', len(article_title)
                while num < len(article_title):
                    # Insert one row with 8 values
                    sql = '''insert into csdn_blog
                             (URL,Author,Artitle,Description,Manage,FBTime,YDNum,PLNum)
                             values(%s, %s, %s, %s, %s, %s, %s, %s)'''
                    Artitle = article_title[num].text
                    Description = article_description[num].text
                    Manage = article_manage[num].text
                    print Artitle
                    print Description
                    print Manage
                    # Author: last segment of the blog URL
                    Author = url.split('/')[-1]
                    # Read count and comment count
                    mode = re.compile(r'\d+\.?\d*')
                    YDNum = mode.findall(Manage)[-2]
                    PLNum = mode.findall(Manage)[-1]
                    print YDNum
                    print PLNum
                    # Publish time: everything before the " 阅读" (reads) label
                    end = Manage.find(u' 阅读')
                    FBTime = Manage[:end]
                    cur.execute(sql, (url, Author, Artitle, Description, Manage, FBTime, YDNum, PLNum))

                    num = num + 1
                else:
                    print u'Rows inserted into the database'
                m = m + 1

        # Exception handling
        except MySQLdb.Error, e:
            print "Mysql Error %d: %s" % (e.args[0], e.args[1])
        finally:
            cur.close()
            conn.commit()
            conn.close()

        n = n + 1

    else:
        urlfile.close()
        print 'Load Over'

main()

Put the blog home URLs of the users to be crawled in the Blog_URL.txt file, one per line, as shown in the figure below. Note that the author has separate code that collects the URLs of all CSDN experts in advance; it is omitted here to avoid hitting other people's blogs just to inflate their view counts.
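For reference, a minimal Blog_URL.txt simply lists one blog home URL per line; the second address below is a made-up placeholder, not a real blogger:

http://blog.csdn.net/Eastmount
http://blog.csdn.net/SomeOtherBlogger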

[screenshot]

The analysis process is as follows.

1. Get the blogger's total page count
First read the blogger's address from Blog_URL.txt, then open the page and read the total page count. The code is as follows:

# Get the total page count shown at the bottom of a blogger's article list
def getPage():
    print 'getPage'
    number = 0
    texts = driver.find_element_by_xpath("//p[@id='papelist']").text
    print 'Pager text:', texts
    m = re.findall(r'(\w*[0-9]+)\w*', texts)  # extract the numbers with a regular expression
    print 'Total pages: ' + str(m[1])
    return int(m[1])

For example, here the total page count is 17, as shown in the figure below:

[screenshot]
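As a quick check of the regular expression on its own (under Python 2, as used in this article), the sketch below runs it on a made-up pager string; the text "1473条  共17页" is an assumed example of what the old CSDN pager displays, not taken from the article:

# coding=utf-8
import re

texts = u"1473条  共17页"                   # assumed pager text: "1473 entries, 17 pages in total"
m = re.findall(r'(\w*[0-9]+)\w*', texts)    # same regex as getPage()
print m          # [u'1473', u'17']
print int(m[1])  # 17 -> the total page count that getPage() returns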

2. Page-turning DOM tree analysis

Paging here works through the URL itself, which is convenient. For example: http://blog.csdn.net/Eastmount/article/list/2
So you only need to: 1. get the total page count; 2. crawl the information on each page; 3. update the URL to move to the next page; 4. repeat.
Alternatively, you can page by clicking "Next page"; when no "Next page" link is left, paging stops, the crawl of this blogger ends, and the next blogger is crawled. A minimal sketch of this click-based variant follows the figure below.

[screenshot]
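The click-based variant is sketched below, reusing the driver and imports from the complete code above. It is only a sketch: the link text u'下一页' ("Next page") is an assumption about the old CSDN layout and is not taken from the article's code; the crawl of the current page goes where the placeholder comment is.

# Sketch: page through by clicking "Next page" until the link disappears
while True:
    # ... crawl the articles on the current page here ...
    next_links = driver.find_elements_by_link_text(u'下一页')  # assumed link text
    if not next_links:
        break              # no "Next page" link -> last page reached
    next_links[0].click()
    time.sleep(2)          # give the next page time to load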

3. Get the details: title, abstract, time

Next, inspect the elements of each blog list page. Crawling these pages with BeautifulSoup returns a "Forbidden" error, which is why Selenium is used here. Inspection shows that each article is wrapped in its own container element, as shown below, so you only need to locate that element.

[screenshot]

Within each article block, locate the title, the abstract, and the management info (publish time, read count, comment count).

[screenshot]

The code is shown below. Note that the three values are obtained in the same pass through the page, so the lists stay aligned: the n-th title, abstract, and info entry all belong to the same article.


# Titles
article_title = driver.find_elements_by_xpath("//p[@class='article_title']")
for title in article_title:
    con = title.text
    con = con.strip("\n")
    print con + '\n'

# Abstracts
article_description = driver.find_elements_by_xpath("//p[@class='article_description']")
for description in article_description:
    con = description.text
    con = con.strip("\n")
    print con + '\n'

# Management info (publish time, read count, comment count)
article_manage = driver.find_elements_by_xpath("//p[@class='article_manage']")
for manage in article_manage:
    con = manage.text
    con = con.strip("\n")
    print con + '\n'

num = 0
print u'Articles on this page:', len(article_title)
while num < len(article_title):
    Artitle = article_title[num].text
    Description = article_description[num].text
    Manage = article_manage[num].text
    print Artitle, Description, Manage
    num = num + 1
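As a design note, this index-based pairing only works because the three find_elements_by_xpath calls return elements in document order, so the n-th title, abstract, and info entry belong to the same article. The same pairing can be written with zip; a minimal sketch using the lists from the code above:

# Equivalent pairing with zip(); relies on the three lists being in document order
for title, description, manage in zip(article_title, article_description, article_manage):
    print title.text
    print description.text
    print manage.text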

4. Special string handling
The code below shows how to get the blogger name after the last "/" of the URL, the timestamp string, and the read count:


# Blogger name: last segment of the URL
url = "http://blog.csdn.net/Eastmount"
print url.split('/')[-1]
# Output: Eastmount

# Extract the numbers
name = "2015-09-08 18:06 阅读(909) 评论(0)"
print name
import re
mode = re.compile(r'\d+\.?\d*')
print mode.findall(name)
# Output: ['2015', '09', '08', '18', '06', '909', '0']
print mode.findall(name)[-2]
# Output: 909

# Publish time: everything before the " 阅读" (reads) label
end = name.find(r' 阅读')
print name[:end]
# Output: 2015-09-08 18:06

import time, datetime
a = time.strptime(name[:end], '%Y-%m-%d %H:%M')
print a
# Output: time.struct_time(tm_year=2015, tm_mon=9, tm_mday=8, tm_hour=18, tm_min=6,
#   tm_sec=0, tm_wday=1, tm_yday=251, tm_isdst=-1)
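If FBTime should be bound to the DATETIME column as a real timestamp rather than a plain string, datetime.strptime returns an object that MySQLdb can pass straight to cur.execute(). A minimal sketch using the name and end variables from above (MySQL also parses the plain 'YYYY-MM-DD HH:MM' string, so this step is optional):

import datetime

fbtime = datetime.datetime.strptime(name[:end], '%Y-%m-%d %H:%M')
print fbtime
# Output: 2015-09-08 18:06:00 -- a datetime object that can be bound to the FBTime column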

3. Database operations
The SQL statement that creates the csdn_blog table used by the crawler is as follows:


CREATE TABLE `csdn_blog` (
 `ID` int(11) NOT NULL AUTO_INCREMENT,
 `URL` varchar(100) COLLATE utf8_bin DEFAULT NULL,
 `Author` varchar(50) COLLATE utf8_bin DEFAULT NULL COMMENT 'Author',
 `Artitle` varchar(100) COLLATE utf8_bin DEFAULT NULL COMMENT 'Title',
 `Description` varchar(400) COLLATE utf8_bin DEFAULT NULL COMMENT 'Abstract',
 `Manage` varchar(100) COLLATE utf8_bin DEFAULT NULL COMMENT 'Info',
 `FBTime` datetime DEFAULT NULL COMMENT 'Publish time',
 `YDNum` int(11) DEFAULT NULL COMMENT 'Read count',
 `PLNum` int(11) DEFAULT NULL COMMENT 'Comment count',
 `DZNum` int(11) DEFAULT NULL COMMENT 'Like count',
 PRIMARY KEY (`ID`)
) ENGINE=InnoDB AUTO_INCREMENT=9371 DEFAULT CHARSET=utf8 COLLATE=utf8_bin;

The populated table is shown below:

[screenshot]
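Once the table is populated, the kind of analysis mentioned in the introduction can start with simple aggregate queries, for example counting posts per hour of day for one blogger. This is only a minimal sketch, assuming the csdn_blog table and the connection settings used above:

# coding=utf-8
import MySQLdb

conn = MySQLdb.connect(host='localhost', user='root', passwd='123456',
                       port=3306, db='test01', charset='utf8')
cur = conn.cursor()
# Posts per hour of day for one blogger, most active hours first
cur.execute("""SELECT HOUR(FBTime), COUNT(*) FROM csdn_blog
               WHERE Author = %s GROUP BY HOUR(FBTime) ORDER BY COUNT(*) DESC""",
            ('Eastmount',))
for hour, posts in cur.fetchall():
    print hour, posts
cur.close()
conn.close()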

For calling MySQL from Python, I recommend the following article:
Python Topic 9: Basics of MySQL Database Programming
The core code is shown below:


# coding:utf-8
import MySQLdb

try:
    conn = MySQLdb.connect(host='localhost', user='root', passwd='123456', port=3306, db='test01')
    cur = conn.cursor()

    # Insert a row
    sql = '''insert into student values(%s, %s, %s)'''
    cur.execute(sql, ('yxz', '111111', '10'))

    # Read the data back
    print u'\nInserted rows:'
    cur.execute('select * from student')
    for data in cur.fetchall():
        print '%s %s %s' % data
    cur.close()
    conn.commit()
    conn.close()
except MySQLdb.Error, e:
    print "Mysql Error %d: %s" % (e.args[0], e.args[1])

Note that some blogs use CSDN's newer page layout, for which the page count cannot be read this way.
For example: http://blog.csdn.net/michaelzhou224
In that case a small tweak is needed to skip these links and record them in a file. The core code is shown below:


# Get the total page count shown at the bottom of a blogger's article list
def getPage():
    print 'getPage'
    number = 0
    #texts = driver.find_element_by_xpath("//p[@id='papelist']").text
    texts = driver.find_element_by_xpath("//p[@class='pagelist']").text
    print 'testsss'
    print u'Pager text:', texts
    if texts == "":
        print u'Page count is 0 -- unsupported page layout'
        return 0
    m = re.findall(r'(\w*[0-9]+)\w*', texts)  # extract the numbers with a regular expression
    print u'Total pages: ' + str(m[1])
    return int(m[1])

The main function is modified as follows:


    error = codecs.open("Blog_Error.txt", 'a', 'utf-8')

    # Loop over each blogger and fetch the article summary information
    while n < count:
        url = urlfile.readline()
        url = url.strip("\n")
        print url
        driver.get(url + "/article/list/1")
        #print driver.page_source
        # Total page count
        allPage = getPage()
        print u'Total pages:', allPage
        # Skip bad URLs, otherwise the program would get stuck here
        if allPage == 0:
            error.write(url + "\r\n")
            print u'Bad URL'
            continue  # skip to the next blogger
        time.sleep(2)
        # Database operations
        try:
            .....

The above is the detailed content of "An illustrated example of using Python Selenium to crawl content and store it in a MySQL database". For more information, please follow other related articles on the PHP Chinese website!
