
Node.js information crawler: development process and code sharing

小云云 · Original · 2018-01-09

This article introduces the process of developing an information crawler with Node.js. The crawler workflow can be summarized as downloading the target website's HTML locally and then extracting data from it. The details follow below; I hope they help you.

A recent project needed some news feeds. Since the project is written in Node.js, it was natural to write the crawler in Node.js as well.

Project address: github.com/mrtanweijie… . The project crawls news content from Readhub, Open Source China, Developer Toutiao, and 36Kr. It does not handle multiple pages for now, because the crawler runs once a day and fetching only the latest items is enough; this may be improved later.

The crawler process boils down to two steps: download the HTML of the target website locally, then extract the data from it.

1. Download page

Node.js has many HTTP request libraries; request is used here. The main code is as follows:

requestDownloadHTML () {
  const options = {
    url: this.url,
    headers: {
      // Rotate the User-Agent so requests look less uniform
      'User-Agent': this.randomUserAgent()
    }
  }
  return new Promise((resolve, reject) => {
    request(options, (err, response, body) => {
      if (!err && response.statusCode === 200) {
        return resolve(body)
      } else {
        return reject(err)
      }
    })
  })
}
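Because requestDownloadHTML returns a Promise, a caller can simply await it. Here is a minimal sketch of a hypothetical caller (the spider parameter and the logging are illustrative assumptions, not part of the project):

async function fetchPage (spider) {
  try {
    // Wait for the wrapped request to resolve with the page body
    const html = await spider.requestDownloadHTML()
    console.log(`Downloaded ${html.length} characters`)
    return html
  } catch (err) {
    console.error('Download failed:', err)
    return null
  }
}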

Wrapping the download in a Promise makes it convenient to consume with async/await later, as the sketch above shows. Because many websites are rendered on the client side, the downloaded page may not contain the desired HTML content yet. For those sites we can use Google's puppeteer to download the page after client-side rendering. As is well known, running npm i puppeteer may fail because it has to download a Chromium build; just retry a few times :)

puppeteerDownloadHTML () {
  return new Promise(async (resolve, reject) => {
    try {
      const browser = await puppeteer.launch({ headless: true })
      const page = await browser.newPage()
      await page.goto(this.url)
      // Grab the rendered <body> and serialize its HTML
      const bodyHandle = await page.$('body')
      const bodyHTML = await page.evaluate(body => body.innerHTML, bodyHandle)
      await browser.close() // close the browser to avoid leaking Chromium processes
      return resolve(bodyHTML)
    } catch (err) {
      console.log(err)
      return reject(err)
    }
  })
}

Of course, for pages rendered on the client side, it is even better to call the site's data interface directly when one is available, so that the subsequent HTML parsing step is not needed at all. After a simple wrapper, the downloader can be used like this:

await new Downloader('http://36kr.com/newsflashes', DOWNLOADER.puppeteer).downloadHTML()
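The Downloader class itself is not listed in this article; roughly, such a wrapper only needs to pick one of the two download methods based on the type passed in. Below is a sketch under that assumption (the constructor usage and the DOWNLOADER constant come from the call above, the internals are guessed):

// Hypothetical shape of the Downloader wrapper used above
const DOWNLOADER = { request: 'request', puppeteer: 'puppeteer' }

class Downloader {
  constructor (url, type = DOWNLOADER.request) {
    this.url = url
    this.type = type
  }

  downloadHTML () {
    // Choose the strategy selected by the caller
    return this.type === DOWNLOADER.puppeteer
      ? this.puppeteerDownloadHTML()
      : this.requestDownloadHTML()
  }

  // requestDownloadHTML () { ... }   // as shown in section 1
  // puppeteerDownloadHTML () { ... } // as shown in section 1
}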

2. HTML content extraction

HTML content extraction naturally uses the wonderful cheerio. Cheerio exposes the same interface as jQuery and is very simple to use. Open the page in the browser, press F12 to inspect the element nodes you need, and then extract the content accordingly.

readHubExtract () {
  // Each Readhub entry sits under #itemList with the .enableVisited class
  let nodeList = this.$('#itemList').find('.enableVisited')
  nodeList.each((i, e) => {
    let a = this.$(e).find('a')
    this.extractData.push(
      this.extractDataFactory(
        a.attr('href'),
        a.text(),
        '',
        SOURCECODE.Readhub
      )
    )
  })
  return this.extractData
}
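The extractor assumes this.$ already holds a cheerio instance built from the downloaded HTML. A minimal sketch of that setup (the Extractor class shape here is an assumption for illustration):

// Hypothetical setup: load the downloaded HTML into cheerio
import cheerio from 'cheerio'

class Extractor {
  constructor (html) {
    // this.$ then works like jQuery's $ on the server side
    this.$ = cheerio.load(html)
    this.extractData = []
  }
}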

3. Scheduled tasks

Use cron to run the crawler once a day:
function job () {
  let cronJob = new cron.CronJob({
    cronTime: cronConfig.cronTime,
    onTick: () => {
      spider()
    },
    start: false
  })
  cronJob.start()
}
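cronConfig.cronTime is not shown above; for the cron package it is a cron expression (with an optional leading seconds field). For example, to run the spider at 07:00 every day (the actual value used in the project may differ):

// Hypothetical config: fire at 07:00:00 every day
const cronConfig = {
  cronTime: '00 00 7 * * *'
}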

4. Data persistence

Strictly speaking, data persistence should not be a crawler's concern. Here, mongoose is used to create the Model:

import mongoose from 'mongoose'
const Schema = mongoose.Schema
const NewsSchema = new Schema(
  {
    title: { type: 'String', required: true },
    url: { type: 'String', required: true },
    summary: String,
    recommend: { type: Boolean, default: false },
    source: { type: Number, required: true, default: 0 },
    status: { type: Number, required: true, default: 0 },
    createdTime: { type: Date, default: Date.now }
  },
  {
    collection: 'news'
  }
)
export default mongoose.model('news', NewsSchema)
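Before the Model can be used, mongoose needs a connection somewhere in the project's startup code. A minimal sketch (the connection string is a placeholder, not the project's real one):

// Hypothetical bootstrap: connect mongoose before saving any documents
import mongoose from 'mongoose'

mongoose.connect('mongodb://localhost:27017/spider')
  .then(() => console.log('MongoDB connected'))
  .catch(err => console.error('MongoDB connection error:', err))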

Basic operations

import { OBJ_STATUS } from '../../Constants'
class BaseService {
  constructor (ObjModel) {
    this.ObjModel = ObjModel
  }

  saveObject (objData) {
    return new Promise((resolve, reject) => {
      this.ObjModel(objData).save((err, result) => {
        if (err) {
          return reject(err)
        }
        return resolve(result)
      })
    })
  }
}
export default BaseService

News service

import BaseService from './BaseService'
import News from '../models/News'
class NewsService extends BaseService {}
export default new NewsService(News)

Then the data can be saved happily:

await newsService.batchSave(newsListTem)
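batchSave is not listed in the excerpts above. One plausible way to implement it on BaseService is with Mongoose's insertMany; this is only a guess at the shape, the real implementation lives in the repository:

// Hypothetical batchSave method on BaseService, using Model.insertMany for bulk writes
batchSave (objDataList) {
  return new Promise((resolve, reject) => {
    this.ObjModel.insertMany(objDataList, (err, result) => {
      if (err) {
        return reject(err)
      }
      return resolve(result)
    })
  })
}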

For more details, just head over to GitHub and clone the project to see the full code.


