PHP implements crawling and analysis of Zhihu user data
Background: the author used a crawler written with PHP's curl to experimentally crawl the basic information of 50,000 Zhihu users, and then did a simple analysis and presentation of the crawled data.
The PHP spider code and the user dashboard code have been cleaned up and uploaded to GitHub; code updates are posted on the author's blog and public account. The program is for entertainment and learning only; if it infringes on any of Zhihu's rights or interests, please contact me and I will remove it as soon as possible.
No pictures, no truth, so a couple of screenshots first:
Screenshot of the analysis dashboard on mobile
Screenshot of the analysis dashboard on PC
The whole crawl, analysis, and presentation process breaks down roughly into the following steps, introduced one by one below.
Crawling page data with curl
PHP's curl extension is a library bundled with PHP that lets you connect to and communicate with servers over many kinds of protocols. It is a very convenient tool for fetching web pages, and it can also be extended to concurrent crawling through curl_multi.
This program crawls the public profile page that Zhihu exposes for each user, https://www.zhihu.com/people/xxx. Fetching the page requires a logged-in user's cookie. The code follows.
Get page cookie
// Log in to Zhihu, open the personal center, open the console, and get cookies
document.cookie
"_za=67254197-3wwb8d-43f6-94f0-fb0e2d521c31; _ga=GA1.2.2142818188.1433767929; q_c1=78ee1604225d47d08cddd8142a08288b23|1452172601 000|1452172601000; _xsrf=15f0639cbe6fb607560c075269064393; cap_id="N2QwMTExNGQ0YTY2NGVddlMGIyNmQ4NjdjOTU0YTM5MmQ=|1453444256|49fdc6b43dc51f 702b7d6575451e228f56cdaf5d"; __utmt=1; unlock_ticket= "QUJDTWpmM0lsZdd2dYQUFBQVlRSlZUVTNVb1ZaNDVoQXJlblVmWGJ0WGwyaHlDdVdscXdZU1VRPT0=|1453444421|c47a2afde1ff334d416bafb1cc267b41014c9d5f"; __utma=51854 390.21428dd18188.1433767929.1453187421.1453444257.3; __utmb=51854390.14.8.1453444425011; __utmc=51854390; __utmz=51854390.1452846679 .1.dd1.utmcsr=google|utmccn=(organic)|utmcmd=organic| utmctr=(not provided); __utmv=51854390.100-1|2=registration_date=20150823=1^dd3=entry_date=20150823=1"
Fetching the personal profile page
Use curl with the cookie attached to fetch your own profile page first.
/**
 * Fetch a user's profile page by user name and store it on disk.
 *
 * @param  string  $username  user name (the ID after /people/ in the URL, e.g. cui-xiao-zhuai)
 * @return boolean            success flag
 */
public function spiderUser($username)
{
    $cookie   = "xxxx";
    $url_info = 'http://www.zhihu.com/people/' . $username; // you can read your own ID straight from the profile URL

    $ch = curl_init($url_info);                  // initialize the session
    curl_setopt($ch, CURLOPT_HEADER, 0);
    curl_setopt($ch, CURLOPT_COOKIE, $cookie);   // send the login cookie with the request
    curl_setopt($ch, CURLOPT_USERAGENT, $_SERVER['HTTP_USER_AGENT']); // only set under a web server; hard-code a UA string for CLI runs
    curl_setopt($ch, CURLOPT_RETURNTRANSFER, 1); // return the response from curl_exec() as a string instead of printing it
    curl_setopt($ch, CURLOPT_FOLLOWLOCATION, 1);
    $result = curl_exec($ch);

    file_put_contents('/home/work/zxdata_ch/php/zhihu_spider/file/' . $username . '.html', $result);
    return true;
}
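A minimal usage sketch of the method above; the class name ZhihuSpider and the seed user ID are illustrative assumptions, not part of the original code.

// Illustrative usage only: the class name and seed user ID are assumptions.
require_once 'ZhihuSpider.php';

$spider = new ZhihuSpider();

// Seed the crawl with your own user ID (the part after /people/ in the profile URL).
if ($spider->spiderUser('your-own-user-id')) {
    echo "Profile page saved.\n";
}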
Parsing the page with regular expressions to extract new links and crawl further
After a crawled page has been stored, further crawling depends on that page containing links to more users.
Looking at the Zhihu profile page, we found that it lists the people the user follows, the user's followers, and some upvote activity.
As shown below, each newly discovered user appears in the fetched HTML as a profile link; the markup here is illustrative, and only the /people/ pattern matters for the crawler.

// A newly discovered user found in the crawled HTML page, usable for further crawling
<a href="/people/xxx">some user</a>
OK, in this way you can go from yourself, to the people you follow, to the people they follow, and so on, crawling outward continuously. The next step is to extract those user names with a regular expression; a minimal sketch of the full loop follows the extraction code below.
// All users matched to the crawled page
preg_match_all('/\/people\/([\w-]+)"/i', $str, $match_arr);
// Deduplicate and merge into the array of new users, which can then be crawled further
self::$newUserArr = array_unique(array_merge($match_arr[1], self::$newUserArr));
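A minimal sketch of that crawl loop under stated assumptions: spiderUser() is the method shown earlier and writes each page to the file path used there, while the queue handling and the seed user ID are illustrative.

// Breadth-first crawl: start from one user and keep following /people/ links.
$queue   = array('your-own-user-id');   // illustrative seed user
$visited = array();

while (!empty($queue)) {
    $username = array_shift($queue);
    if (isset($visited[$username])) {
        continue;
    }
    $visited[$username] = true;

    // Fetch and store the profile page (method from the earlier listing).
    $spider->spiderUser($username);
    $html = file_get_contents('/home/work/zxdata_ch/php/zhihu_spider/file/' . $username . '.html');

    // Extract newly discovered users with the same regular expression as above.
    preg_match_all('/\/people\/([\w-]+)"/i', $html, $match_arr);
    foreach (array_unique($match_arr[1]) as $newUser) {
        if (!isset($visited[$newUser])) {
            $queue[] = $newUser;
        }
    }
}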
At this point, the entire crawling process can proceed smoothly.
If you need to crawl a large amount of data, look into curl_multi and pcntl for parallel, faster crawling. They are not covered in detail here, but a small curl_multi sketch follows.
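A minimal curl_multi sketch, meant only to show the parallel-request pattern rather than the original implementation; the helper name fetchProfiles() and the batch of user names are assumptions, and $cookie is the same login cookie as above.

// Fetch several profile pages in parallel with curl_multi (illustrative helper).
function fetchProfiles(array $usernames, $cookie)
{
    $mh      = curl_multi_init();
    $handles = array();

    foreach ($usernames as $name) {
        $ch = curl_init('http://www.zhihu.com/people/' . $name);
        curl_setopt($ch, CURLOPT_COOKIE, $cookie);
        curl_setopt($ch, CURLOPT_RETURNTRANSFER, 1);
        curl_setopt($ch, CURLOPT_FOLLOWLOCATION, 1);
        curl_multi_add_handle($mh, $ch);
        $handles[$name] = $ch;
    }

    // Drive all handles until every transfer has finished.
    do {
        curl_multi_exec($mh, $running);
        curl_multi_select($mh);
    } while ($running > 0);

    $pages = array();
    foreach ($handles as $name => $ch) {
        $pages[$name] = curl_multi_getcontent($ch);
        curl_multi_remove_handle($mh, $ch);
        curl_close($ch);
    }
    curl_multi_close($mh);

    return $pages;
}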
Parsing and storing user data
With regular expressions you can go on to match out more fields of user data. The code follows.
// User avatar
preg_match('/<img.+src=\"?([^\s]+\.(jpg|gif|bmp|bnp|png))\"?.+>/i', $str, $match_img);
$img_url = $match_img[1];

// User name, e.g. <span class="name">崔小拽</span>
preg_match('/<span.+class=\"?name\"?>([\x{4e00}-\x{9fa5}]+).+span>/u', $str, $match_name);
$user_name = $match_name[1];

// User bio: Chinese text inside the span with class "bio"
preg_match('/<span.+class=\"?bio\"?.+\>([\x{4e00}-\x{9fa5}]+).+span>/u', $str, $match_title);
$user_title = $match_title[1];

// Gender, e.g. <input type="radio" name="gender" value="1" checked="checked" class="male"/> 男
// gender value 1, ended by ";", followed by Chinese text
preg_match('/<input.+name=\"?gender\"?.+value=\"?1\"?.+([\x{4e00}-\x{9fa5}]+).+\;/u', $str, $match_sex);
$user_sex = $match_sex[1];

// Location, e.g. <span class="location item" title="北京">
preg_match('/<span.+class=\"?location.+\"?.+\"([\x{4e00}-\x{9fa5}]+)\">/u', $str, $match_city);
$user_city = $match_city[1];

// Employer, e.g. <span class="employment item" title="人见人骂的公司">人见人骂的公司</span>
preg_match('/<span.+class=\"?employment.+\"?.+\"([\x{4e00}-\x{9fa5}]+)\">/u', $str, $match_employment);
$user_employ = $match_employment[1];

// Position, e.g. <span class="position item" title="程序猿"><a href="/topic/19590046" title="程序猿" class="topic-link" data-token="19590046" data-topicid="13253">程序猿</a></span>
preg_match('/<span.+class=\"?position.+\"?.+\"([\x{4e00}-\x{9fa5}]+).+\">/u', $str, $match_position);
$user_position = $match_position[1];

// Education, e.g. <span class="education item" title="研究僧">研究僧</span>
preg_match('/<span.+class=\"?education.+\"?.+\"([\x{4e00}-\x{9fa5}]+)\">/u', $str, $match_education);
$user_education = $match_education[1];

// Field of work, e.g. <span class="education-extra item" title='挨踢'>挨踢</span>
preg_match('/<span.+class=\"?education-extra.+\"?.+>([\x{4e00}-\x{9fa5}]+)</u', $str, $match_education_extra);
$user_education_extra = $match_education_extra[1];

// Number of topics followed, e.g. class="zg-link-litblue"><strong>41 个话题</strong></a>
preg_match('/class=\"?zg-link-litblue\"?><strong>(\d+)\s.+strong>/i', $str, $match_topic);
$user_topic = $match_topic[1];

// Following / follower counts, e.g. <span class="zg-gray-normal">关注了
preg_match_all('/<strong>(\d+)<.+<label>/i', $str, $match_care);
$user_care = $match_care[1][0];
$user_be_careed = $match_care[1][1];

// Profile view count, e.g. <span class="zg-gray-normal">个人主页被 <strong>17</strong> 人浏览</span>
preg_match('/class=\"?zg-gray-normal\"?.+>(\d+)<.+span>/i', $str, $match_browse);
$user_browse = $match_browse[1];
During crawling, buffer data through Redis before it goes into the database if you can; this noticeably improves crawl and insert throughput, and a minimal sketch of the idea follows. If Redis is not an option, the only choice is to optimize the SQL path; some thoughts on that come after the sketch.
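A minimal sketch of such a Redis buffer, assuming the phpredis extension; the key name zhihu:user_queue, the batch size, and the insertBatch() helper (a batch INSERT like the one shown further down) are illustrative, and the $user_* variables come from the parsing step above.

// Producer side: after parsing a profile, push the row into a Redis list
// instead of writing to MySQL directly.
$redis = new Redis();
$redis->connect('127.0.0.1', 6379);

$userRow = array(
    'name'      => $user_name,
    'sex'       => $user_sex,
    'city'      => $user_city,
    'education' => $user_education,
);
$redis->rPush('zhihu:user_queue', json_encode($userRow));

// Consumer side: a separate worker pops rows and inserts them in batches.
$batch = array();
while (($json = $redis->lPop('zhihu:user_queue')) !== false) {
    $batch[] = json_decode($json, true);
    if (count($batch) >= 1000) {
        insertBatch($batch);   // illustrative helper, see the batch INSERT below
        $batch = array();
    }
}
if (!empty($batch)) {
    insertBatch($batch);
}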
Be careful with indexes when designing the tables. While the spider is running, it is best not to index the user name or the related fields, not even the primary key, so that inserts stay as fast as possible. Just imagine the overhead of maintaining an index on 50 million rows, updated on every single insert. Build the indexes in batches after crawling finishes and the data is ready to be analyzed.
Data inserts and updates should be done in batches. MySQL's official advice on insert, delete, and update speed: http://dev.mysql.com/doc/refman/5.7/en/insert-speed.html
# The officially recommended multi-row batch insert
INSERT INTO yourtable VALUES (1,2), (5,5), ...;
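A hedged PHP sketch of such a batch insert using PDO; the DSN, credentials, table name, and column names are illustrative, not the original schema.

// Insert many parsed user rows with a single multi-row INSERT statement.
function insertBatch(array $rows)
{
    $pdo = new PDO('mysql:host=127.0.0.1;dbname=zhihu_spider;charset=utf8', 'user', 'pass');

    $placeholders = array();
    $values       = array();
    foreach ($rows as $row) {
        $placeholders[] = '(?, ?, ?, ?)';
        $values[] = $row['name'];
        $values[] = $row['sex'];
        $values[] = $row['city'];
        $values[] = $row['education'];
    }

    $sql  = 'INSERT INTO zhihu_user (name, sex, city, education) VALUES ' . implode(', ', $placeholders);
    $stmt = $pdo->prepare($sql);
    $stmt->execute($values);
}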
Deployment. The crawler may hang or die abnormally while running, so to keep things efficient and stable it is worth writing a small scheduled script (run from cron, for example) that kills the spider every so often and starts it again. That way an abnormal exit never wastes too much precious time; after all, time is money.
#!/bin/bash
# kill the running spider
ps aux | grep spider | awk '{print $2}' | xargs kill -9
sleep 5s
# start it again
nohup /home/cuixiaohuan/lamp/php5/bin/php /home/cuixiaohuan/php/zhihu_spider/spider_new.php &
Data analysis and presentation
The data is presented mainly with ECharts 3.0, which feels reasonably compatible with mobile. The responsive layout for mobile is handled with a few simple CSS rules.
Shortcomings and things still to learn
The whole process touches PHP, shell, JS, CSS, HTML, regular expressions, and other languages, plus basic deployment knowledge, but there is still a lot to improve and polish. The author records this here and will follow up later, for example: