How to implement Baidu index crawler function

This article shows how to implement a Baidu Index crawler and what to watch out for along the way. The following is a practical case; let's take a look.

I once read an interesting article that introduced the front-end anti-crawling techniques used by various major companies, but as that article itself admitted, there is no 100% effective anti-crawling method. This article introduces a simple way to bypass all of these front-end anti-crawler measures.

The following code takes Baidu Index as an example. The code has been packaged into a Baidu Index crawler node library:

https://github.com/Coffcer/baidu-index-spider

Note: please do not abuse crawlers or cause trouble for others.

Baidu Index’s anti-crawler strategy

Look at the Baidu Index interface: the index data is a trend chart, and when the mouse hovers over a particular day, two requests are fired and the result is shown in a floating tooltip.

It turns out that Baidu Index does apply some anti-crawler measures on the front end. When the mouse moves over the chart, two requests are triggered: one returns a snippet of HTML, the other returns a generated image. The HTML contains no actual values; instead, the digits are displayed by positioning slices of that image with width and margin-left. The requests also carry parameters such as res and res1 that we do not know how to fake, so it is hard to get Baidu Index data with conventional simulated requests or HTML scraping.

Crawler Idea

Breaking through Baidu's anti-crawler measures is actually very simple: just ignore how they work. All we need to do is simulate user operations, take screenshots of the required values, and run image recognition on them. The steps are roughly:

  1. Simulate login

  2. Open the index page

  3. Move the mouse to the specified date

  4. Wait for the requests to finish, then screenshot the numeric part of the image

  5. Run image recognition to get the value

  6. Loop through steps 3 to 5 to get the value corresponding to each date

In theory this method can crawl the content of any website. Next, we will implement the crawler step by step. The following libraries will be used; a condensed sketch of the overall flow appears right after the list:

  1. puppeteer: simulates browser operations

  2. node-tesseract: a wrapper around Tesseract, used for image recognition

  3. jimp: image cropping
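To make the loop over steps 3 to 5 concrete, here is a condensed sketch of the overall flow (the URL constant and the loop body are placeholders; each piece is implemented in the sections below):

const puppeteer = require('puppeteer');
const BAIDU_INDEX_URL = '...'; // the Baidu Index page for the keyword, same constant used later
const dates = ['2018-03-01', '2018-03-02']; // example: the days to crawl
(async () => {
 const browser = await puppeteer.launch({ headless: false });
 const page = await browser.newPage();
 await page.goto(BAIDU_INDEX_URL);
 // ... simulate login as shown in the next section ...
 const results = [];
 for (const date of dates) {
  // 1. move the mouse over this date on the chart
  // 2. wait for the tooltip, then screenshot and crop the value
  // 3. run OCR on the cropped image and push { date, value } into results
 }
 await browser.close();
 console.log(results);
})();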

Install Puppeteer and simulate user operations

Puppeteer is a Chrome automation tool from the Google Chrome team that lets you control Chrome and issue commands to it. You can use it to simulate user operations, run automated tests, build crawlers, and so on. It is very easy to use and there are plenty of introductory tutorials online; reading this article should give you a reasonable idea of how it works.

API documentation: https://github.com/GoogleChrome/puppeteer/blob/master/docs/api.md
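As a quick taste of the API (a minimal standalone sketch, not part of the crawler; the URL is only an example), launching a browser, opening a page and saving a screenshot looks like this:

const puppeteer = require('puppeteer');
(async () => {
 const browser = await puppeteer.launch();
 const page = await browser.newPage();
 await page.goto('https://example.com');
 await page.screenshot({ path: 'example.png' }); // save a screenshot of the page
 await browser.close();
})();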

Installation:

npm install --save puppeteer
Puppeteer automatically downloads Chromium during installation to make sure it can run out of the box. However, networks in mainland China may not be able to download Chromium successfully. If the download fails, you can install with cnpm, or point the download address at the Taobao mirror before installing:

npm config set PUPPETEER_DOWNLOAD_HOST=https://npm.taobao.org/mirrors
npm install --save puppeteer
You can also skip the Chromium download during installation and run against a locally installed Chrome by specifying its path in code:

// npm
npm install --save puppeteer --ignore-scripts
// node
puppeteer.launch({ executablePath: '/path/to/Chrome' });

Implementation

To keep the layout tidy, only the main parts are listed below; the selectors in the code are replaced with '...'. For the complete code, please refer to the GitHub repository linked at the top of the article.

Open the Baidu Index page and simulate login

All we do here is simulate user operations, clicking and typing step by step. The login CAPTCHA is not handled; dealing with it is a topic of its own, and if you have already logged into Baidu locally you generally will not be asked for one.

// Launch the browser.
// If headless is set to true, Puppeteer drives Chromium in the background and you will not see the browser at all.
// Set it to false to open a browser window on your machine and watch every operation.
const browser = await puppeteer.launch({ headless: false });
const page = await browser.newPage();
// Open Baidu Index
await page.goto(BAIDU_INDEX_URL);
// Simulate login
await page.click('...');
await page.waitForSelector('...');
// Type the Baidu account name and password, then log in
await page.type('...', 'username');
await page.type('...', 'password');
await page.click('...');
await page.waitForNavigation();
console.log(':white_check_mark: Logged in successfully');
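One way to avoid typing credentials (and hitting the CAPTCHA) on every run is to reuse a persistent browser profile. This is an optional suggestion rather than part of the original library, using Puppeteer's standard userDataDir launch option:

// Reuse the same profile directory across runs so cookies, including the Baidu login, persist
const browser = await puppeteer.launch({
 headless: false,
 userDataDir: './user-data' // any writable directory works; the path here is just an example
});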

Simulate moving the mouse and obtain the required data

We need to scroll the page to the trend chart area, move the mouse onto a particular date, wait for the request to finish and the tooltip to show the value, then take a screenshot and save the image.

// Get the coordinates of the first day on the chart
const position = await page.evaluate(() => {
 const $image = document.querySelector('...');
 const $area = document.querySelector('...');
 const areaRect = $area.getBoundingClientRect();
 const imageRect = $image.getBoundingClientRect();
 // Scroll the chart into view
 window.scrollBy(0, areaRect.top);
 return { x: imageRect.x, y: 200 };
});
// Move the mouse to trigger the tooltip
await page.mouse.move(position.x, position.y);
await page.waitForSelector('...');
// Read the tooltip information
const tooltipInfo = await page.evaluate(() => {
 const $tooltip = document.querySelector('...');
 const $title = $tooltip.querySelector('...');
 const $value = $tooltip.querySelector('...');
 const valueRect = $value.getBoundingClientRect();
 const padding = 5;
 return {
  title: $title.textContent.split(' ')[0],
  x: valueRect.x - padding,
  y: valueRect.y,
  width: valueRect.width + padding * 2,
  height: valueRect.height
 };
});

Screenshot

Work out the coordinates of the value, take a screenshot, and crop the image with jimp.

await page.screenshot({ path: imgPath });
// Crop the image, keeping only the number
const img = await jimp.read(imgPath);
await img.crop(tooltipInfo.x, tooltipInfo.y, tooltipInfo.width, tooltipInfo.height);
// Scaling the image up noticeably improves recognition accuracy
await img.scale(5);
await img.write(imgPath);

Image recognition

Here we use Tesseract for image recognition. Tesseract is an OCR tool open-sourced by Google that recognizes text in images, and it can be trained to improve accuracy. There is already a simple Node wrapper on GitHub, node-tesseract; it requires Tesseract to be installed and added to your environment variables (PATH) first.
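If Tesseract itself is not installed yet, it is usually available from the system package manager (for example, brew install tesseract on macOS or apt-get install tesseract-ocr on Debian/Ubuntu); the exact package name may differ on other systems.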

Tesseract.process(imgPath, (err, val) => {
 if (err || val == null) {
  console.error(':x: Recognition failed: ' + imgPath);
  return;
 }
 console.log(val);
});

In practice, an untrained Tesseract makes a few recognition mistakes, for example reading numbers that start with 9 as `3. You can train Tesseract to improve its accuracy, and if the recognition errors are always the same you can also simply patch them with a regular expression.
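As an illustration of the regular-expression patch approach (a hedged sketch; the exact substitutions depend on the mistakes your own Tesseract build makes), a small post-processing step could look like this:

// Clean up recurring OCR mistakes before using the value
function fixOcrValue(raw) {
 return raw
  .replace(/`3/g, '9')    // example mapping: a leading 9 misread as "`3"
  .replace(/[^\d]/g, ''); // drop anything that is not a digit
}
console.log(fixOcrValue('`3,482')); // -> "9482" under the example mapping above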

Packaging

Once the steps above are implemented, all that is left is to wire them together and package them as a Baidu Index crawler Node library. There is of course plenty of room for improvement, such as batch crawling or crawling a specified number of days; with this foundation, none of that is hard to add.

const fs = require('fs');
const path = require('path');
const recognition = require('./src/recognition');
const Spider = require('./src/spider');
// imgDir: the directory screenshots are saved to (defined elsewhere in the repository)
module.exports = {
 async run (word, options, puppeteerOptions = { headless: true }) {
  const spider = new Spider({
   imgDir,
   ...options
  }, puppeteerOptions);
  // Crawl the data
  await spider.run(word);
  // Read the captured screenshots and run image recognition on them
  const wordDir = path.resolve(imgDir, word);
  const imgNames = fs.readdirSync(wordDir)
   .filter(item => path.extname(item) === '.png');
  const result = [];
  for (let i = 0; i < imgNames.length; i++) {
   // Recognize each cropped screenshot; the call into ./src/recognition is schematic,
   // see the repository for the exact function it exports
   result.push(await recognition.run(path.resolve(wordDir, imgNames[i])));
  }
  return result;
 }
};
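With the run signature above, using the packaged library might look like the following sketch (the package name and options shown are assumptions based on the repository, not documented usage):

const baiduIndexSpider = require('baidu-index-spider'); // assumed package name, matching the repository
(async () => {
 // Crawl one keyword and print the recognized values
 const values = await baiduIndexSpider.run('nodejs', {}, { headless: true });
 console.log(values);
})();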
Anti-crawling

Finally, how can this kind of crawler be blocked? Personally, I think checking the mouse movement trajectory could be one approach. Of course, there is no 100% effective anti-crawler technique on the front end; all we can do is make crawling a little harder.
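As a rough illustration of the trajectory idea (a hedged sketch with a hypothetical #chart selector, not anything Baidu actually does), the front end could flag pointers that reach the chart with almost no intermediate movement events:

// A real user generates many small mousemove events on the way to the chart;
// Puppeteer's page.mouse.move() produces very few unless extra steps are requested.
let moveCount = 0;
document.addEventListener('mousemove', () => { moveCount++; });
document.querySelector('#chart').addEventListener('mouseover', () => {
 if (moveCount < 5) {
  console.warn('Suspicious: pointer reached the chart with almost no movement');
 }
 moveCount = 0;
});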
