
Scrapy is a data scraping application that comes with crawler templates

WBOY (Original) · 2023-06-22 09:24:06

With the continuous development of Internet technology, crawler technology has also been widely used. Crawler technology can automatically crawl data on the Internet and store it in a database, providing convenience for data analysis and data mining. As a very famous crawler framework in Python, Scrapy comes with some common crawler templates, which can quickly crawl data on the target website and automatically save it to a local or cloud database. This article will introduce how to use Scrapy's own crawler template to crawl data, and how to clean, parse and store data during the crawling process.

1. Introduction to Scrapy crawler templates

Scrapy comes with several crawler templates, including the basic Spider template, the CrawlSpider template, and the XmlFeedSpider template. The Spider template is the most basic: it applies to a wide range of sites and is easy to use. The CrawlSpider template is a rule-based crawler that can quickly follow multi-level links and supports custom crawling rules. The XmlFeedSpider template is designed for crawling XML feeds. Using these templates greatly reduces development effort and improves crawling efficiency.

2. Scrapy crawler template application

The following practical example illustrates how to use Scrapy's built-in Spider template to crawl data. The target website is a movie information site whose homepage lists the latest movies. We will crawl each movie's name, director, actors, and rating from this site and save them to a local database.

  1. Create Scrapy project

First, you need to open the command line window, switch to the target working directory, and then enter the following command:

scrapy startproject movies

This command creates a Scrapy project named movies. The project directory will contain a subdirectory named spiders, which holds the crawler programs.

  2. Create a Spider from the template

In the project directory, use the following command to create a Spider named movie_spider:

scrapy genspider movie_spider www.movies.com

This command automatically generates a program based on the Spider template, where www.movies.com is the domain name of the target website. In the spiders directory, a file named movie_spider.py will appear with the following content:

import scrapy

class MovieSpider(scrapy.Spider):
    name = 'movie_spider'
    allowed_domains = ['www.movies.com']
    start_urls = ['http://www.movies.com/']

    def parse(self, response):
        pass

This is the most basic Spider program. Here, name is the crawler's name, allowed_domains is the list of domains the crawler is allowed to visit, and start_urls is the list of URLs to start crawling from. In the parse method, we write the code that parses the response and extracts data.

  3. Data crawling and parsing

Next, we write code to capture and parse the target website's data from the response object. For the movie information website above, we can use XPath or CSS selectors to locate elements on the page. Assuming each movie name is stored in a div element with class movie-name, we can extract all movie names with the following code:

def parse(self, response):
    # Select the text of every div whose class is "movie-name"
    movies = response.xpath('//div[@class="movie-name"]/text()').extract()
    for movie in movies:
        # Yield each movie name as an item dictionary
        yield {'name': movie}

Here, we use XPath syntax to locate all div elements with class movie-name, and the extract method pulls out their text content. We then loop over the results and yield each movie name, turning the parse method into a generator of items.

Similarly, we can locate other elements we are interested in through XPath or CSS selectors. For example, director and actor information may be stored in a div element with class director, and rating information may be stored in a div element with class rate.

  4. Data Storage

In the Spider program, we need to write code to save the captured data to a local or cloud database. Scrapy supports saving data to a variety of different databases, including MySQL, PostgreSQL, MongoDB, etc.

For example, we can use a MySQL database to save movie information. In the spiders directory, we can create a file named mysql_pipeline.py, which contains the following code:

import pymysql

class MysqlPipeline(object):
    def __init__(self):
        # Connect to the local MySQL database; adjust credentials as needed
        self.conn = pymysql.connect(host='localhost', user='root', passwd='123456',
                                    db='movie_db', charset='utf8')

    def process_item(self, item, spider):
        # Insert one crawled item into the movie table
        cursor = self.conn.cursor()
        sql = "INSERT INTO movie(name, director, actors, rate) VALUES(%s, %s, %s, %s)"
        cursor.execute(sql, (item['name'], item['director'], item['actors'], item['rate']))
        self.conn.commit()
        cursor.close()
        return item  # pass the item on to any later pipelines

    def __del__(self):
        self.conn.close()

This pipeline saves the crawled data to the MySQL database, where movie_db is the database name and the movie table contains four fields, name, director, actors, and rate, used to store the movie name, director, actors, and rating. The process_item method saves each item generated by the Spider program to the database and returns it so that later pipelines can still process it.
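The pymysql code above needs a running MySQL server. To exercise the same process_item logic locally without one, here is a sketch of an equivalent pipeline backed by Python's built-in sqlite3 module; the SqlitePipeline name and the table schema mirror the MySQL version but are otherwise illustrative:

```python
import sqlite3

class SqlitePipeline(object):
    """Same shape as MysqlPipeline, but backed by stdlib sqlite3 for local testing."""

    def __init__(self, db_path=':memory:'):
        self.conn = sqlite3.connect(db_path)
        # Same four fields as the MySQL movie table
        self.conn.execute(
            "CREATE TABLE IF NOT EXISTS movie(name TEXT, director TEXT, actors TEXT, rate TEXT)"
        )

    def process_item(self, item, spider):
        # sqlite3 uses ? placeholders instead of pymysql's %s
        self.conn.execute(
            "INSERT INTO movie(name, director, actors, rate) VALUES(?, ?, ?, ?)",
            (item['name'], item['director'], item['actors'], item['rate']),
        )
        self.conn.commit()
        return item  # pass the item on to any later pipelines

    def close(self):
        self.conn.close()
```

Swapping this into ITEM_PIPELINES lets you verify the crawl end to end before provisioning a MySQL database.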

In order to use the mysql_pipeline.py file, we also need to add the following configuration to the settings.py file:

ITEM_PIPELINES = {
    'movies.spiders.mysql_pipeline.MysqlPipeline': 300
}

Here, 'movies.spiders.mysql_pipeline.MysqlPipeline' specifies the import path of the pipeline class defined in mysql_pipeline.py. The number 300 indicates the pipeline's priority: the smaller the number, the higher the priority, and the earlier the pipeline runs.
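Because pipelines run in ascending order of this number, several of them can be chained. The settings.py sketch below assumes a hypothetical CleanItemPipeline (not part of this project) that would normalize fields before the MySQL pipeline writes them:

```python
ITEM_PIPELINES = {
    # Hypothetical pipeline that would clean/normalize item fields; runs first
    'movies.pipelines.CleanItemPipeline': 100,
    # The MySQL pipeline from this article; runs second
    'movies.spiders.mysql_pipeline.MysqlPipeline': 300,
}
```

Each pipeline's process_item must return the item (or raise DropItem) for the next pipeline in the chain to receive it.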

  5. Run the Scrapy program

In the project directory, execute the following command to run the Scrapy program:

scrapy crawl movie_spider

This command starts the movie_spider crawler, which crawls data from the target website and stores it in the MySQL database.

3. Summary

This article introduced how to use Scrapy's built-in crawler templates, including the Spider, CrawlSpider, and XmlFeedSpider templates. A practical example showed how to use the Spider template to crawl and parse data and save the results to a MySQL database. Using Scrapy for data crawling can greatly improve the efficiency and quality of data collection and provides strong support for subsequent data analysis, data mining, and other work.

The above is the detailed content of Scrapy is a data scraping application that comes with crawler templates. For more information, please follow other related articles on the PHP Chinese website!
