How to use Scrapy to build an efficient crawler program
As the amount of data on the Internet keeps growing, so does the demand for collecting it at scale, and web crawlers have become one of the most common solutions to this need. Scrapy, a mature Python crawler framework, is efficient, stable and easy to use, and is widely applied across many fields. This article introduces how to use Scrapy to build an efficient crawler program, with code examples.
A Scrapy crawler program is mainly made up of the following components: the engine, which coordinates the data flow; the scheduler, which queues requests; the downloader, which fetches pages; spiders, which define the crawling and parsing logic; item pipelines, which process and store the extracted data; and middlewares, which hook into the request/response cycle. In a typical project, the parts we write ourselves are the spider and the item pipeline.
To write a crawler with Scrapy, we first need to create a new project. Execute the following command on the command line:
scrapy startproject myspider
This creates a project folder named "myspider" containing some default files and folders. We can then enter the folder and generate a new spider:
cd myspider
scrapy genspider example example.com
This creates a spider named "example" for crawling data from the "example.com" website. The generated skeleton lives in the project's "spiders/example.py" file, and that is where we write the actual crawling logic.
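For reference, the generated project usually looks roughly like this (the exact template can differ slightly between Scrapy versions):

myspider/
    scrapy.cfg            # deployment configuration
    myspider/
        __init__.py
        items.py          # item definitions
        middlewares.py    # spider and downloader middlewares
        pipelines.py      # item pipelines
        settings.py       # project settings
        spiders/
            __init__.py
            example.py    # the spider generated by genspider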
Here is a simple example for crawling news headlines and links on a website.
import scrapy

class ExampleSpider(scrapy.Spider):
    name = 'example'
    allowed_domains = ['example.com']
    start_urls = ['http://www.example.com/news']

    def parse(self, response):
        # Extract the title and link of every news item on the page
        for news in response.xpath('//div[@class="news-item"]'):
            yield {
                'title': news.xpath('.//h2/text()').get(),
                'link': news.xpath('.//a/@href').get(),
            }
        # Follow the "next page" link, if there is one
        next_page = response.xpath('//a[@class="next-page"]/@href').get()
        if next_page:
            yield response.follow(next_page, self.parse)
In the above code, we define a spider class named "ExampleSpider" with three attributes: name is the spider's name, allowed_domains lists the domains the spider is allowed to crawl, and start_urls holds the starting URLs. We then override the parse method, which parses the page content, extracts news titles and links, and returns the results with yield; if a "next page" link is found, the spider follows it and parses it with the same method.
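The spider above yields plain Python dictionaries. If you prefer structured items, you can also declare the fields in the project's "items.py" and yield those instead; the NewsItem name below is just an illustrative choice, not something Scrapy generates for you:

import scrapy

class NewsItem(scrapy.Item):
    # Fields the spider is expected to fill in
    title = scrapy.Field()
    link = scrapy.Field()

In the spider, yield NewsItem(title=..., link=...) instead of a dict; item pipelines receive both kinds of items in the same way.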
In Scrapy, we can process the crawled data through item pipelines. A pipeline can store data in a database, write it to a file, or apply any other post-processing.
Open the "settings.py" file in the project folder, find the ITEM_PIPELINES configuration item in it, and uncomment it. Then add the following code:
ITEM_PIPELINES = {
    'myspider.pipelines.MyPipeline': 300,
}
This enables the custom pipeline class "myspider.pipelines.MyPipeline" and assigns it a priority value (the lower the number, the earlier the pipeline runs).
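Since the goal is an efficient (and polite) crawler, it is also worth tuning a few download settings in the same "settings.py". The values below are only illustrative starting points and are not taken from the original example:

# settings.py (excerpt) -- illustrative values
ROBOTSTXT_OBEY = True               # respect robots.txt
CONCURRENT_REQUESTS = 16            # total concurrent requests
CONCURRENT_REQUESTS_PER_DOMAIN = 8  # concurrency per domain
DOWNLOAD_DELAY = 0.5                # seconds between requests to the same site
AUTOTHROTTLE_ENABLED = True         # let Scrapy adjust the delay automatically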
Next, we need a pipeline class to process the data. Open (or create) the "pipelines.py" file in the project folder and add the following code:
import json

class MyPipeline:
    def open_spider(self, spider):
        # Open the output file when the spider starts
        self.file = open('news.json', 'w')

    def close_spider(self, spider):
        # Close the file when the spider finishes
        self.file.close()

    def process_item(self, item, spider):
        # Serialize each item as one line of JSON
        line = json.dumps(dict(item)) + "\n"
        self.file.write(line)
        return item
In this example, we define a pipeline class named "MyPipeline" with three methods: open_spider, close_spider and process_item. In open_spider we open a file for storing the data, in close_spider we close it, and in process_item we serialize each item to a line of JSON and write it to the file.
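A pipeline can just as easily write to a database instead of a file. Below is a minimal sketch using Python's built-in sqlite3 module; the news.db file and news table are assumptions made for this example, not part of the tutorial above:

import sqlite3

class SQLitePipeline:
    def open_spider(self, spider):
        # Connect to (or create) the database and the target table
        self.conn = sqlite3.connect('news.db')
        self.conn.execute('CREATE TABLE IF NOT EXISTS news (title TEXT, link TEXT)')

    def close_spider(self, spider):
        self.conn.commit()
        self.conn.close()

    def process_item(self, item, spider):
        # Insert one row per scraped item
        self.conn.execute(
            'INSERT INTO news (title, link) VALUES (?, ?)',
            (item.get('title'), item.get('link')),
        )
        return item

To use it, register 'myspider.pipelines.SQLitePipeline' in ITEM_PIPELINES just like MyPipeline above.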
Once the spider and the item pipeline are written, we can run the crawler with the following command:
scrapy crawl example
This launches the spider named "example" and starts crawling. The scraped items are processed by the pipeline class we defined.
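As a side note, for simple cases Scrapy's built-in feed exports can save the scraped items directly, without writing a pipeline at all:

scrapy crawl example -o news.json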
The above is the basic process and sample code for using Scrapy to build an efficient crawler program. Of course, Scrapy also offers many other features and options that can be adjusted and extended according to specific needs. I hope this article can help readers better understand and use Scrapy and build efficient crawler programs.