Scrapy tutorial: from installation to proficiency, with concrete code examples
Introduction:
Scrapy is a powerful open source web crawler framework for Python. It can be used for tasks such as crawling web pages, extracting data, and cleaning and persisting that data. This article walks you through the Scrapy installation process step by step and provides concrete code examples to help you go from getting started to becoming proficient with the Scrapy framework.
1. Install Scrapy
To install Scrapy, first make sure you have installed Python and pip. Then, open a command line terminal and enter the following command to install:
pip install scrapy
The installation process may take some time, so please be patient. If you run into permission issues, you can try prefixing the command with sudo.
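To verify that the installation succeeded, you can import Scrapy from Python and print its version (a minimal sanity check; the version string you see depends on what pip installed):
import scrapy

# If this import works and a version prints, Scrapy is ready to use.
print(scrapy.__version__)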
2. Create a Scrapy project
After the installation is complete, we can use Scrapy’s command line tool to create a new Scrapy project. In the command line terminal, go to the directory where you want to create the project and execute the following command:
scrapy startproject tutorial
This will create a Scrapy project folder named "tutorial" in the current directory. Entering the folder, we can see the following directory structure:
tutorial/
    scrapy.cfg
    tutorial/
        __init__.py
        items.py
        middlewares.py
        pipelines.py
        settings.py
        spiders/
            __init__.py
Here, scrapy.cfg is the project's configuration file, and the inner tutorial folder is the Python package containing our own code: items.py defines the data items, middlewares.py holds the middlewares, pipelines.py holds the item pipelines, settings.py stores the project settings, and spiders/ is where the spiders live.
3. Define a spider
In Scrapy, we use spiders to define the rules for crawling web pages and extracting data. Create a new Python file in the spiders directory, name it quotes_spider.py (you can pick a name that suits your needs), and define a simple spider with the following code:
import scrapy

class QuotesSpider(scrapy.Spider):
    name = "quotes"
    start_urls = [
        'http://quotes.toscrape.com/page/1/',
    ]

    def parse(self, response):
        # Extract the text and author of every quote on the page
        for quote in response.css('div.quote'):
            yield {
                'text': quote.css('span.text::text').get(),
                'author': quote.css('span small::text').get(),
            }
        # Follow the "next page" link, if there is one
        next_page = response.css('li.next a::attr(href)').get()
        if next_page is not None:
            yield response.follow(next_page, self.parse)
In the code above, we created a spider named QuotesSpider. The name attribute is the spider's unique identifier, the start_urls attribute lists the URLs of the first pages we want to crawl, and the parse method is the spider's default callback, used to parse each response and extract data.
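If you want to experiment with these CSS selectors before committing them to a spider, Scrapy ships with an interactive shell. A short session might look like the following (the selectors are the same ones used above; the actual values returned depend on the live page):
scrapy shell 'http://quotes.toscrape.com/page/1/'
# Inside the shell, `response` already holds the downloaded page:
>>> response.css('div.quote span.text::text').get()   # text of the first quote
>>> response.css('li.next a::attr(href)').get()       # relative URL of the next page, e.g. '/page/2/'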
4. Run the spider
In a command line terminal, go to the root directory of the project (i.e. the outer tutorial folder) and execute the following command to start the spider and begin crawling:
scrapy crawl quotes
The spider will start crawling from the initial URL, parsing each page and extracting data according to the rules we defined, and following the pagination links until no next page remains.
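At this point the extracted items only appear in the crawl log. If you simply want them written to a file, Scrapy's built-in feed exports can do that with no extra code (the -O flag, which overwrites the output file, requires Scrapy 2.1 or later; on older versions use -o, which appends):
scrapy crawl quotes -O quotes.json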
5. Save the data
In most cases we want to persist the scraped data. In Scrapy, an Item Pipeline can be used to clean, process, and store it. Add the following code to the pipelines.py file:
import json

class TutorialPipeline:
    def open_spider(self, spider):
        # Called once when the spider starts: open the output file
        self.file = open('quotes.json', 'w')

    def close_spider(self, spider):
        # Called once when the spider finishes: close the file
        self.file.close()

    def process_item(self, item, spider):
        # Write each item as one JSON object per line (JSON Lines format)
        line = json.dumps(dict(item)) + "\n"
        self.file.write(line)
        return item
In the code above, we created an Item Pipeline named TutorialPipeline. The open_spider method is called when the spider starts and initializes the output file; the close_spider method is called when the spider finishes and closes the file; and the process_item method processes and saves each scraped item. Note that a pipeline only runs after it is enabled via ITEM_PIPELINES, as shown in the next section.
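As an aside, the generated items.py file is where you can optionally define structured Item classes instead of yielding plain dicts; this makes field names explicit and catches typos early. A minimal sketch for this tutorial (the class name QuoteItem is our own choice, not part of the generated project):
import scrapy

class QuoteItem(scrapy.Item):
    # Declare the fields the spider produces
    text = scrapy.Field()
    author = scrapy.Field()
The spider could then yield QuoteItem(text=..., author=...) instead of a dict, and the pipeline above would work unchanged, since dict(item) accepts both.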
6. Configure the Scrapy project
In the settings.py file you can configure the Scrapy project's behavior. Some commonly used settings are:
ROBOTSTXT_OBEY: whether to obey the robots.txt protocol;
USER_AGENT: sets the user agent string, allowing the crawler to identify itself or mimic different browsers;
ITEM_PIPELINES: enables and configures Item Pipelines;
DOWNLOAD_DELAY: sets a download delay to avoid putting excessive pressure on the target website.
A minimal sketch of these settings follows.
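The values below are illustrative assumptions, not required settings; tune them for your target site:
# settings.py -- illustrative values
ROBOTSTXT_OBEY = True        # respect robots.txt
USER_AGENT = 'tutorial (+http://www.yourdomain.com)'  # identify your crawler

# Enable our pipeline; the number (0-1000) controls the order when
# several pipelines are active
ITEM_PIPELINES = {
    'tutorial.pipelines.TutorialPipeline': 300,
}

DOWNLOAD_DELAY = 1           # seconds to wait between requests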
7. Summary
With the steps above, we have completed the installation and basic usage of Scrapy. I hope this article helps you go from getting started to becoming proficient with the Scrapy framework. To learn Scrapy's more advanced features and usage, refer to the official Scrapy documentation and keep practicing and exploring on real projects. I wish you success in the world of web crawling!