Not every site has an API to access data from it. Most don’t, in fact. If you need to pull that data, one approach is to “scrape” it. That is, load the page in a web browser (that you automate), find what you are looking for in the DOM, and take it.
You can do this yourself if you want to deal with the cost, maintenance, and technical debt. For example, this is one of the big use-cases for “headless” browsers, like how Puppeteer can spin up and control headless Chrome.
Or, you can use a tool like scrapestack, a ready-to-use API that not only does the scraping for you, but likely does it better, faster, and with more options than you could manage yourself.
Say my goal is to pull the latest completed meetup from a Meetup.com page. Meetup.com has an API, but it’s pricy and requires OAuth and stuff. All we need is the name and link of a past meetup here, so let’s just yank it off the page.
We can see what we need in the DOM:
To have a play, let’s scrape it with the scrapestack API client-side with jQuery:
$.get(
  'https://api.scrapestack.com/scrape',
  {
    access_key: 'MY_API_KEY',
    url: 'https://www.meetup.com/BendJS/'
  },
  function (websiteContent) {
    // we have the entire site's HTML here!
  }
);
Within that callback, I can now also use jQuery to traverse the DOM, snagging the pieces I want, and constructing what I need on our site:
// Get what we want
let event = $(websiteContent)
  .find(".groupHome-eventsList-pastEvents .eventCard")
  .first();
let eventTitle = event.find(".eventCard--link")[0].innerText;
let eventLink =
  `https://www.meetup.com/` +
  event.find(".eventCard--link").attr("href");

// Use it on page
$("#event").append(`
  <a href="${eventLink}">${eventTitle}</a>
`);
In real usage, if we were doing it client-side like this, we’d make use of some rudimentary storage so we wouldn’t have to hit the API on every page load, like sticking the result in localStorage and invalidating after a few days or something.
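That caching idea might look something like this minimal sketch. The cache key, the three-day window, and the injectable `storage` parameter (handy for environments without `localStorage`) are all assumptions for illustration:

```javascript
// Cache the scraped result so we don't hit the API on every page load.
// Key name and expiry window are made up for this example.
const CACHE_KEY = "latestMeetup";
const MAX_AGE_MS = 3 * 24 * 60 * 60 * 1000; // invalidate after ~3 days

function readCache(storage = window.localStorage) {
  const raw = storage.getItem(CACHE_KEY);
  if (!raw) return null;
  const { savedAt, data } = JSON.parse(raw);
  // treat anything older than the window as stale
  return Date.now() - savedAt < MAX_AGE_MS ? data : null;
}

function writeCache(data, storage = window.localStorage) {
  storage.setItem(
    CACHE_KEY,
    JSON.stringify({ savedAt: Date.now(), data })
  );
}
```

On page load you would call `readCache()` first, and only make the scrapestack request (then `writeCache()` the result) on a miss.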
It works!
It’s actually much more likely that we do our scraping server-side. For one thing, that’s how you protect your API keys: keeping them secret is your responsibility, and it isn’t really possible on a public-facing site that calls the API directly client-side.
Myself, I’d probably make a cloud function to do it, so I can stay in JavaScript (Node.js), and have the opportunity to tuck the data in storage somewhere.
I’d say go check out the documentation and see if this isn’t the right answer next time you need to do some scraping. You get 10,000 requests on the free plan to try it out anyway, and it jumps up a ton on any of the paid plans with more features.
Direct Link →