
How to use Scrapy to crawl Kugou Music songs?

Jun 22, 2023 pm 10:59 PM

With the growth of the Internet, the amount of information online keeps increasing, and people often need to crawl data from different websites for analysis and mining. Scrapy is a full-featured Python crawler framework that can automatically crawl website data and output it in structured form. Kugou Music is one of the most popular online music platforms. Below I will introduce how to use Scrapy to crawl song information from Kugou Music.

1. Install Scrapy

Scrapy is a framework based on the Python language, so you need to set up a Python environment first: install Python and pip before installing Scrapy. Once they are in place, you can install Scrapy with the following command:

pip install scrapy

2. Create a new Scrapy project

Scrapy provides a set of command-line tools that make it easy to create a new project. Enter the following command:

scrapy startproject kuwo_music

After execution, a Scrapy project named "kuwo_music" will be created in the current directory. In this project, we need to create a new crawler to crawl the song information of Kugou Music.

3. Create a new crawler

In the Scrapy project, a crawler is a program used to crawl and parse data on a specific website. In the "kuwo_music" project directory, execute the following command:

scrapy genspider kuwo www.kuwo.cn 

The above command will create a file named "kuwo.py" in the "kuwo_music/spiders" directory. This file holds our crawler code; in it we define how the website's data is requested and parsed.

4. Website request and page parsing

In the newly created "kuwo.py" file, you first need to import the necessary modules:

import json  # needed by parse() below to decode the search API's response

import scrapy
from kuwo_music.items import KuwoMusicItem
from scrapy_redis.spiders import RedisSpider
from scrapy_redis.connection import get_redis_from_settings
from scrapy.utils.project import get_project_settings

These imports give us access to the tool classes and methods provided by the Scrapy framework, as well as our project's own modules. Before writing the crawler itself, we first need to analyze the pages that hold the song information.

Open a browser, visit www.kuwo.cn, type a song name into the search bar, and search; the page jumps to a search-results page. On that page you can see information about each song: song name, artist, duration, and so on. We need to send a request through Scrapy and parse the search-results page to get each song's details.
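The search request is an ordinary GET request with query parameters. As a sketch (the endpoint and parameter names are taken from this article's spider code and reflect Kuwo's web API at the time of writing; they may have changed since), the URL can be built with the standard library:

```python
from urllib.parse import urlencode

def build_search_url(keyword, page=1, per_page=8):
    """Build the search-list URL used by the spider below."""
    params = {
        'key': keyword,       # search keyword
        'rformat': 'json',    # ask for a JSON response
        'ft': 'music',        # restrict results to songs
        'encoding': 'utf8',
        'rn': per_page,       # results per page
        'pn': page,           # page number
    }
    return 'http://www.kuwo.cn/search/list?' + urlencode(params)

print(build_search_url('爱情'))
```

A side benefit: urlencode percent-encodes non-ASCII keywords automatically, so Chinese song names are safe to pass in.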

In the crawler code, we need to implement the following two methods:

def start_requests(self):
    ...
    
def parse(self, response):
    ...

The start_requests() method sends the initial requests and designates parse() as their callback; parse() then parses each response and extracts the data. The full code is as follows:

class KuwoSpider(RedisSpider):
    name = 'kuwo'
    allowed_domains = ['kuwo.cn']
    redis_cli = get_redis_from_settings(get_project_settings())

    def start_requests(self):
        keywords = ['爱情', '妳太善良', '说散就散']
        # URL of the search-results page for each keyword
        for keyword in keywords:
            url = f'http://www.kuwo.cn/search/list?key={keyword}&rformat=json&ft=music&encoding=utf8&rn=8&pn=1'
            yield scrapy.Request(url=url, callback=self.parse)

    def parse(self, response):
        data = json.loads(response.text)
        # extract each song's info from the search-results page
        song_list = data['data']['list']
        for song in song_list:
            music_id = song['musicrid'][6:]
            song_name = song['name']
            singer_name = song['artist']
            album_name = song['album']

            # fetch the song's detailed info (playback URL) by song id
            url = f'http://www.kuwo.cn/url?format=mp3&rid=MUSIC_{music_id}&response=url&type=convert_url3&br=128kmp3&from=web&t=1639056420390&httpsStatus=1&reqId=6be77da1-4325-11ec-b08e-11263642326e'
            meta = {'song_name': song_name, 'singer_name': singer_name, 'album_name': album_name}
            yield scrapy.Request(url=url, callback=self.parse_song, meta=meta)

    def parse_song(self, response):
        item = KuwoMusicItem()
        item['song_name'] = response.meta.get('song_name')
        item['singer_name'] = response.meta.get('singer_name')
        item['album_name'] = response.meta.get('album_name')
        item['song_url'] = response.text.strip()
        yield item

In the above code, start_requests() defines the search keywords, builds the URL of each search-results page, and sends the requests. parse() parses each results page and extracts every song's name, artist, album, and id. For each song it then builds the URL that returns the song's playback address, using Scrapy's metadata (meta) mechanism to pass the song name, singer, and album along with the request. Finally, parse_song() reads the playback address from the response and emits it in a custom KuwoMusicItem object.
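Because parse() only manipulates the decoded JSON, its extraction logic can be exercised offline against a hand-made payload whose shape mirrors the fields the spider reads (the field names `musicrid`, `name`, `artist`, and `album` are taken from the code above; the real API response contains many more fields):

```python
import json

# A minimal hand-made payload mirroring the shape parse() expects.
sample = json.dumps({
    'data': {'list': [
        {'musicrid': 'MUSIC_12345', 'name': '说散就散',
         'artist': 'JC', 'album': '说散就散'},
    ]}
})

def extract_songs(raw_json):
    """Replicate the field extraction performed in parse()."""
    data = json.loads(raw_json)
    songs = []
    for song in data['data']['list']:
        songs.append({
            'music_id': song['musicrid'][6:],  # strip the 'MUSIC_' prefix
            'song_name': song['name'],
            'singer_name': song['artist'],
            'album_name': song['album'],
        })
    return songs

print(extract_songs(sample))
```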

5. Data storage and use

In the above code, we defined a custom KuwoMusicItem object to store the crawled song information. We can use the RedisPipeline class provided by scrapy_redis to store the crawled data in a Redis database:

ITEM_PIPELINES = {
    'scrapy_redis.pipelines.RedisPipeline': 300,
}
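A Scrapy item pipeline is just a class with a process_item(item, spider) method; RedisPipeline implements it by serializing each item and pushing it onto a Redis list. The hypothetical in-memory stand-in below illustrates the same contract without needing a Redis server:

```python
import json

class InMemoryPipeline:
    """A hypothetical stand-in for RedisPipeline: same process_item
    contract, but items go into a plain Python list instead of Redis."""

    def __init__(self):
        self.items = []

    def process_item(self, item, spider):
        # RedisPipeline would serialize the item and push it to Redis here.
        self.items.append(json.dumps(dict(item), ensure_ascii=False))
        return item  # a pipeline must return the item (or raise DropItem)

pipeline = InMemoryPipeline()
pipeline.process_item({'song_name': '说散就散', 'song_url': 'http://...'}, spider=None)
print(pipeline.items[0])
```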

At the same time, we can also store the data in a local CSV file; the pipeline below uses Python's built-in csv module:

import csv

class CsvPipeline(object):
    # store crawled items in a CSV file
    def __init__(self):
        self.file = open('kuwo_music.csv', 'w', encoding='utf-8', newline='')
        self.writer = csv.writer(self.file)
        self.writer.writerow(['song_name', 'singer_name', 'album_name', 'song_url'])

    def close_spider(self, spider):
        self.file.close()

    def process_item(self, item, spider):
        self.writer.writerow([item['song_name'], item['singer_name'], item['album_name'], item['song_url']])
        return item
        return item
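The CSV-writing logic can be checked without running a crawl. The sketch below reproduces the same header and row layout against an in-memory buffer, using a fake item (the song values are made up for illustration):

```python
import csv
import io

# Write the header plus one fake item, exactly as CsvPipeline does.
buf = io.StringIO()
writer = csv.writer(buf)
writer.writerow(['song_name', 'singer_name', 'album_name', 'song_url'])

item = {'song_name': '说散就散', 'singer_name': 'JC',
        'album_name': '说散就散', 'song_url': 'http://example.com/song.mp3'}
writer.writerow([item['song_name'], item['singer_name'],
                 item['album_name'], item['song_url']])

# Read the rows back to confirm the layout.
rows = list(csv.reader(io.StringIO(buf.getvalue())))
print(rows)
```

Remember that CsvPipeline, like any pipeline, only runs if it is registered in ITEM_PIPELINES in settings.py.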

Finally, execute the following command on the command line to start the Scrapy crawler:

scrapy crawl kuwo

The above is a detailed introduction to using the Scrapy framework to crawl song information from Kugou Music. I hope it provides some reference and help.

The above is the detailed content of How to use Scrapy to crawl Kugou Music songs?. For more information, please follow other related articles on the PHP Chinese website!
