Static websites: Axios and Cheerio
Let's walk through crawling a static eCommerce website using JavaScript. For this example, we'll use two popular libraries: Axios for HTTP requests and Cheerio for parsing HTML.
**1. Install dependencies**
Install Axios and Cheerio with npm:
npm install axios cheerio
**2. Create the script**

Create a JavaScript file, e.g. scrapeEcommerce.js, and open it in your code editor.
**3. Import modules**
Import Axios and Cheerio into your script:
const axios = require('axios');
const cheerio = require('cheerio');
**4. Define the target URL**

Select the eCommerce website you want to scrape. This example uses the hypothetical URL http://example-ecommerce.com; replace it with your target URL:
const url = 'http://example-ecommerce.com';
**5. Get the HTML content**
Use Axios to send a GET request to the target URL and get the HTML content:
axios.get(url)
  .then(response => {
    const html = response.data;
    // The HTML content can now be parsed
  })
  .catch(error => {
    console.error('Error fetching the page:', error);
  });
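Network requests can fail transiently (timeouts, rate limits), so a small retry wrapper can make the fetch step more robust. The following helper is an illustrative sketch, not part of Axios or the original tutorial; `withRetries` is a hypothetical name:

```javascript
// Retry a promise-returning function up to `retries` times.
// Illustrative helper; adapt the retry count and add backoff as needed.
async function withRetries(fn, retries = 3) {
  let lastError;
  for (let attempt = 1; attempt <= retries; attempt++) {
    try {
      return await fn();
    } catch (err) {
      lastError = err;
    }
  }
  // All attempts failed: surface the last error to the caller.
  throw lastError;
}

// Usage with Axios would look like:
// const response = await withRetries(() => axios.get(url));
```

This keeps the scraping logic unchanged while tolerating occasional failed requests.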
**6. Parse HTML and extract data**
Use Cheerio to parse the HTML code and extract the information you want, such as product names and prices:
axios.get(url)
  .then(response => {
    const html = response.data;
    const $ = cheerio.load(html);
    const products = [];
    $('.product').each((index, element) => {
      const name = $(element).find('.product-name').text().trim();
      const price = $(element).find('.product-price').text().trim();
      products.push({ name, price });
    });
    console.log(products);
  })
  .catch(error => {
    console.error('Error fetching the page:', error);
  });
**Key points**

**Full example script:**
const axios = require('axios');
const cheerio = require('cheerio');
const url = 'http://example-ecommerce.com';
axios.get(url)
  .then(response => {
    const html = response.data;
    const $ = cheerio.load(html);
    const products = [];
    $('.product').each((index, element) => {
      const name = $(element).find('.product-name').text().trim();
      const price = $(element).find('.product-price').text().trim();
      products.push({ name, price });
    });
    console.log(products);
  })
  .catch(error => {
    console.error('Error fetching the page:', error);
  });
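Real product listings usually span multiple pages. A minimal sketch of extending the script to pagination is to generate one URL per page and fetch them in turn; note that the `?page=N` query parameter here is an assumption, so check the actual URL scheme of your target site:

```javascript
const baseUrl = 'http://example-ecommerce.com';

// Build a list of page URLs to fetch in sequence.
// The `/products?page=N` pattern is hypothetical; adapt it to the target site.
function pageUrls(base, pages) {
  const urls = [];
  for (let page = 1; page <= pages; page++) {
    urls.push(`${base}/products?page=${page}`);
  }
  return urls;
}

console.log(pageUrls(baseUrl, 3));
// e.g. a list starting with 'http://example-ecommerce.com/products?page=1'
```

Each URL can then be passed to the same axios.get / cheerio.load pipeline shown above, ideally with a short delay between requests to avoid overloading the server.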
**Customizing the script for your target page:**

Adjust the CSS selectors used above (.product, .product-name, .product-price) to match the actual markup of the site you are scraping.
If you prefer Python, Ruby, or another programming language for web scraping, or would rather avoid coding altogether, Octoparse is an excellent tool, especially for websites that rely heavily on JavaScript.
Consider a concrete example: before you start scraping a target website, you should check whether it blocks JavaScript-based scraping. Different websites use different protection methods, and it can take time and frustrating trial and error before you realize something is wrong, especially when scraping does not produce the expected results. With a dedicated web scraping tool, the data extraction process goes more smoothly.
Many web scraping tools save you from writing crawlers. Octoparse is particularly efficient at scraping JavaScript-heavy pages and can extract data from 99% of web pages, including those using Ajax. It also offers Captcha solving services. Octoparse is free to use and offers an auto-discovery feature and over 100 easy-to-use templates that enable efficient data extraction. New users can also take advantage of a 14-day trial.
The above is the detailed content of Efficiently scraping JavaScript websites. For more information, please follow other related articles on the PHP Chinese website!