How to implement a simple crawler using Node.js
Why choose Node to write a crawler? Because the cheerio library is fully compatible with jQuery syntax; if you are familiar with jQuery, it is genuinely fun to use (a short demo follows the list below).

The dependencies we will use:
cheerio: a Node.js take on jQuery
http: Node's built-in module, encapsulating an HTTP server and a simple HTTP client
iconv-lite: solves the garbled-character problem when crawling gb2312-encoded pages
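To see what "jQuery-compatible" means in practice, here is a tiny standalone sketch; the HTML string is made up purely for demonstration:

```javascript
// A tiny standalone demo of cheerio's jQuery-style API.
// The HTML string here is made up purely for illustration.
var cheerio = require('cheerio');

var $ = cheerio.load('<ul><li class="movie">Movie A</li><li class="movie">Movie B</li></ul>');
$('.movie').each(function (idx, element) {
    console.log($(element).text()); // prints "Movie A", then "Movie B"
});
```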
Since we want to crawl the site's content, we should first take a look at its basic structure.
We picked Movie Paradise (ygdy8.net) as the target website, aiming to crawl all the latest movies' download links.
The page structure is as follows:
We can see that each movie's title sits in an `a` tag with class `ulink`; moving up, the outermost container has class `co_content8`.
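To make that selector concrete, here is a rough sketch of the structure just described, reconstructed from the class names alone rather than the site's exact markup:

```javascript
// Rough sketch of the structure described above, reconstructed from the class
// names .co_content8 and .ulink -- not the site's exact markup.
var cheerio = require('cheerio');

var html =
    '<div class="co_content8">' +
    '  <ul>' +
    '    <a class="ulink" href="/html/gndy/dyzz/example.html">Some movie title</a>' +
    '  </ul>' +
    '</div>';

var $ = cheerio.load(html);
console.log($('.co_content8 .ulink').text()); // "Some movie title"
```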
OK, let's get to work.
First, introduce the dependencies and set the URL to be crawled:
```javascript
var cheerio = require('cheerio');
var http = require('http');
var iconv = require('iconv-lite');

var url = 'http://www.ygdy8.net/html/gndy/dyzz/index.html';
```
Core code, index.js:
```javascript
http.get(url, function(sres) {
    var chunks = [];
    sres.on('data', function(chunk) {
        chunks.push(chunk);
    });
    // chunks now holds the page's HTML. After decoding it and handing it to
    // cheerio.load we get back a variable implementing the jQuery interface,
    // which we name `$`; everything from here on is plain jQuery.
    sres.on('end', function() {
        var titles = [];
        // The page is encoded as gb2312 (see its <meta> tag), so we must
        // transcode it first, otherwise the text comes out garbled.
        var html = iconv.decode(Buffer.concat(chunks), 'gb2312');
        var $ = cheerio.load(html, {decodeEntities: false});
        $('.co_content8 .ulink').each(function (idx, element) {
            var $element = $(element);
            titles.push({
                title: $element.text()
            })
        })
        console.log(titles);
    });
});
```
Run `node index`.
The results are as follows
We have successfully obtained the movie titles. But what if we want the titles from multiple pages? Changing the URL by hand for every page is out of the question. Of course there is a way; read on!
We only need to wrap the previous code in a function and execute it recursively, and we are done.
Core code, index.js:
```javascript
var index = 1;   // page counter
var url = 'http://www.ygdy8.net/html/gndy/dyzz/list_23_';
var titles = []; // collects the titles

function getTitle(url, i) {
    console.log("Fetching the content of page " + i);
    http.get(url + i + '.html', function(sres) {
        var chunks = [];
        sres.on('data', function(chunk) {
            chunks.push(chunk);
        });
        sres.on('end', function() {
            var html = iconv.decode(Buffer.concat(chunks), 'gb2312');
            var $ = cheerio.load(html, {decodeEntities: false});
            $('.co_content8 .ulink').each(function (idx, element) {
                var $element = $(element);
                titles.push({
                    title: $element.text()
                })
            })
            // The tail of this snippet was cut off in the source; the ending below is
            // reconstructed: recurse through the first three pages, then print the results.
            if (i < 4) {
                getTitle(url, ++i);
            } else {
                console.log(titles);
            }
        });
    });
}

getTitle(url, index); // kick off the crawl
```

The results are as follows:

Get the movie download link

If we were doing this by hand, we would click through to each movie's details page to find the download address. So how do we implement that through Node?

Let's first analyze the layout of a details page.

To locate the download link accurately, we first find the element whose `id` is `Zoom`; the download link sits in the `a` tag under a `td` inside it.

Then we define a function getBtLink() to get the download link:

```javascript
var btLink = []; // collects the download links

function getBtLink(urls, n) { // urls holds the addresses of all the details pages
    console.log("Fetching the content of url number " + n);
    http.get('http://www.ygdy8.net' + urls[n].title, function(sres) {
        var chunks = [];
        sres.on('data', function(chunk) {
            chunks.push(chunk);
        });
        sres.on('end', function() {
            var html = iconv.decode(Buffer.concat(chunks), 'gb2312'); // decode gb2312
            var $ = cheerio.load(html, {decodeEntities: false});
            $('#Zoom td').children('a').each(function (idx, element) {
                var $element = $(element);
                btLink.push({
                    bt: $element.attr('href')
                })
            })
            // The tail of this snippet was cut off in the source; the ending below is
            // reconstructed: recurse through the remaining details pages, then print the links.
            if (n < urls.length - 1) {
                getBtLink(urls, ++n);
            } else {
                console.log(btLink);
            }
        });
    });
}
```

Note that for this step getTitle should push each link's `href` (`$element.attr('href')`) rather than its text, since getBtLink appends `urls[n].title` to the site's domain; the final `console.log(titles)` then becomes `getBtLink(titles, 0)`.

Run `node index` again.

In this way we have obtained the download links for all the movies on the three pages. Isn't it simple?

Save data

Of course, we need to save the data after we have crawled it.
Here I chose MongoDB to save it. The data-saving function, save():

```javascript
function save() {
    var MongoClient = require('mongodb').MongoClient; // import the dependency
    var mongo_url = 'mongodb://localhost:27017/test'; // assumed connection string; the source never shows it
    MongoClient.connect(mongo_url, function (err, db) {
        if (err) {
            console.error(err);
            return;
        }
        console.log("Connected to the database");
        var collection = db.collection('node-reptitle');
        collection.insertMany(btLink, function (err, result) { // insert the data
            if (err) {
                console.error(err);
            } else {
                console.log("Data saved successfully");
            }
            db.close(); // close only after the insert has finished
        });
    });
}
```

The operation here is very simple; there is no need to use mongoose.
Run `node index` again.
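If you want to confirm the links actually landed in MongoDB, here is a small verification sketch; the connection string is an assumption (the article never shows it), and the collection name comes from save() above:

```javascript
// Verification sketch: read back what save() wrote.
// mongo_url is an assumption -- the article never shows its value.
var MongoClient = require('mongodb').MongoClient;
var mongo_url = 'mongodb://localhost:27017/test';

MongoClient.connect(mongo_url, function (err, db) {
    if (err) { console.error(err); return; }
    db.collection('node-reptitle').find().toArray(function (err, docs) {
        if (err) console.error(err);
        else console.log(docs); // each doc looks like { bt: 'ftp://...' }
        db.close(); // close once the query has finished
    });
});
```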