How to use Scrapy to crawl Douban books and their ratings and comments?

As the Internet has grown, people rely on it more and more to find information. For book lovers, Douban Books has become an indispensable platform: it provides rich book ratings and reviews that let readers understand a book more fully. Collecting this information by hand, however, is like looking for a needle in a haystack. Instead, we can use Scrapy to crawl the data.

Scrapy is an open-source web crawling framework written in Python that helps us extract data from websites efficiently. This article walks through, step by step, how to use Scrapy to crawl Douban books along with their ratings and reviews.

Step 1: Install Scrapy

First, install Scrapy on your computer. If you already have pip (Python's package manager), just run the following command in a terminal or command prompt:

pip install scrapy

This installs Scrapy on your machine. If you see errors or warnings, adjust according to the messages, for example by upgrading pip or installing inside a virtual environment.

Step 2: Create a new Scrapy project

Next, create a new Scrapy project by running the following command in the terminal:

scrapy startproject douban

This command creates a folder named douban in the current directory, containing Scrapy's basic files and directory structure.
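The generated layout looks roughly like this (the standard scrapy startproject skeleton; details may vary slightly between Scrapy versions):

```
douban/
    scrapy.cfg            # deployment configuration
    douban/               # the project's Python module
        __init__.py
        items.py          # item definitions
        middlewares.py    # spider and downloader middleware
        pipelines.py      # item pipelines
        settings.py       # project settings
        spiders/          # directory for spider modules
            __init__.py
```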

Step 3: Write a crawler program

In Scrapy, a crawler program (a spider) tells the framework how to extract data from a website. Create a new file named douban_spider.py inside the project's spiders directory (douban/douban/spiders/) and write the following code:

import scrapy

class DoubanSpider(scrapy.Spider):
    name = 'douban'
    allowed_domains = ['book.douban.com']
    start_urls = ['https://book.douban.com/top250']

    def parse(self, response):
        selector = scrapy.Selector(response)  # response.xpath(...) also works directly
        books = selector.xpath('//tr[@class="item"]')
        for book in books:
            title = book.xpath('td[2]/div[1]/a/@title').extract_first()
            author = book.xpath('td[2]/div[1]/span[1]/text()').extract_first()
            score = book.xpath('td[2]/div[2]/span[@class="rating_nums"]/text()').extract_first()
            comment_count = book.xpath('td[2]/div[2]/span[@class="pl"]/text()').extract_first()
            if comment_count:
                # the raw text is wrapped in parentheses and whitespace; trim both
                comment_count = comment_count.strip().strip('()').strip()
            yield {'title': title, 'author': author, 'score': score, 'comment_count': comment_count}

The code above does two things:

  1. It crawls the title, author, rating, and review count of each book on the Douban Books Top 250 list (the spider above fetches only the first page; to get all 250 books, you would also follow the pagination links).
  2. It returns each scraped record as a dictionary.

In this program, we define a DoubanSpider class and specify the spider's name, the domains it is allowed to visit, and its starting URL. In the parse method, we parse the HTML page through a scrapy.Selector object and use XPath expressions to extract the relevant information about each book.
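The review-count span on the Top 250 page typically holds text like "(\n 513455人评价\n )" (an assumption about Douban's current markup). Rather than relying on strip alone, a small helper can pull out just the digits; this sketch uses only the standard library:

```python
import re

def clean_comment_count(raw):
    """Extract the numeric review count from the "pl" span text.

    Returns None if raw is missing or contains no digits. The input
    format is an assumption about Douban's markup, e.g. "(513455人评价)".
    """
    if not raw:
        return None
    match = re.search(r'\d+', raw)
    return int(match.group()) if match else None

print(clean_comment_count('(\n 513455人评价\n )'))  # 513455
```

Returning an int instead of a string also makes later sorting and filtering of the results easier.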

After extracting the data, we use the yield keyword to return it as a dictionary. yield turns the method into a generator, so items are produced one at a time instead of being collected into a list first. Defining the parse method as a generator is how Scrapy crawls website data efficiently without holding everything in memory.
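The generator behaviour can be shown outside Scrapy with plain Python. Here parse_books and the sample rows are hypothetical stand-ins for the spider's parse method and the scraped table rows:

```python
# Calling a function that contains yield returns a generator; the body
# only runs as items are requested, one per iteration.
def parse_books(rows):
    for title, score in rows:
        yield {'title': title, 'score': score}

rows = [('活着', '9.4'), ('百年孤独', '9.3')]
items = parse_books(rows)   # nothing has executed yet; items is a generator
first = next(items)         # the loop body runs once per next() call
print(first)                # {'title': '活着', 'score': '9.4'}
rest = list(items)          # drain the remaining items
```

Scrapy iterates over parse in exactly this lazy fashion, feeding each yielded item into its pipeline as it arrives.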

Step 4: Run the crawler program

After writing the spider, run the following command in the terminal to start it:

scrapy crawl douban -o result.json

This command starts the spider named douban and writes the scraped data to the result.json file in JSON format.
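Once the crawl finishes, the exported file can be consumed with Python's json module. The sample record below is illustrative; real values depend on what the spider scraped:

```python
import json

# For a real run you would use: json.load(open('result.json', encoding='utf-8'))
sample = ('[{"title": "红楼梦", "author": "[清] 曹雪芹 著", '
          '"score": "9.6", "comment_count": "513455人评价"}]')
books = json.loads(sample)
print(books[0]['title'], books[0]['score'])  # 红楼梦 9.6
```

From here the records can be sorted, filtered, or loaded into a spreadsheet or database as needed.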

Through the four steps above, we can successfully crawl Douban books together with their ratings and review information. Of course, to further improve the crawler's efficiency and stability, some additional tuning is needed, such as setting a download delay or coping with anti-scraping measures.
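A common starting point is to add a few throttling options to the project's settings.py. These are real Scrapy settings, but the exact values below are assumptions to tune for your own crawl, not official recommendations:

```python
# Illustrative additions to douban/settings.py
DOWNLOAD_DELAY = 2                 # wait 2 seconds between requests
RANDOMIZE_DOWNLOAD_DELAY = True    # jitter the delay so requests look less robotic
AUTOTHROTTLE_ENABLED = True        # let Scrapy adapt the delay to server load
USER_AGENT = 'Mozilla/5.0 (compatible; book-crawler-demo)'  # hypothetical UA string
```

A polite delay both reduces the chance of being blocked and lightens the load on the target site.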

In short, using Scrapy to crawl Douban books and their ratings and reviews is a fairly simple and interesting task. If you are interested in data scraping and Python programming, try crawling data from other websites to further sharpen your skills.
