
Practical crawler combat in Python: Maoyan movie crawler

WBOY · Original · 2023-06-10 12:27:26 · 2827 views

With the rapid development of Internet technology, the amount of information online keeps growing. Maoyan Movies, a leading film data platform in China, provides users with comprehensive movie information services. This article shows how to write a simple Maoyan movie crawler in Python to obtain movie-related data.

1. Crawler overview

A crawler, or web crawler, is a program that automatically retrieves data from the Internet. By following links, it can visit target websites and collect information without manual effort. Python is a powerful programming language widely used for data processing, web crawling, data visualization, and more.

2. Crawler implementation

The Maoyan movie crawler in this article is implemented with Python's requests and BeautifulSoup libraries. requests is an HTTP library that makes it easy to send web requests, while BeautifulSoup is an HTML parsing library that can quickly parse HTML pages. Before starting, install both with `pip install requests beautifulsoup4`.

2.1 Import library

Open your Python editor and create a new Python file. First, import the required libraries:

import requests
from bs4 import BeautifulSoup
import csv

2.2 Create a request link

Next, build the request URL. Open the Maoyan Movies website, find the page of the target movie, and copy its link. Here the movie "Detective Chinatown 3" is used as an example:

url = 'https://maoyan.com/films/1250952'

2.3 Send a request

Create a headers dictionary with request-header information. Headers typically include fields such as User-Agent, Referer, and Cookie, which make the request look like it comes from a real browser; here we imitate Chrome. Then use the requests library to send the request and obtain the page's HTML:

headers = {
    'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/58.0.3029.110 Safari/537.36'
}
response = requests.get(url, headers=headers)
response.raise_for_status()  # stop early if the request failed
html = response.text
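Maoyan applies anti-crawler measures, so even an HTTP 200 response may contain a bot-verification page rather than the film page. A minimal heuristic for detecting this is sketched below; the marker strings are assumptions based on how such pages typically look, not an official contract:

```python
def looks_like_verification_page(html: str) -> bool:
    """Guess whether the returned HTML is a bot-verification page.

    The markers below ("验证中心" means "verification center") are
    hypothetical examples, not guaranteed page content.
    """
    markers = ('验证中心', 'captcha')
    return any(marker in html for marker in markers)
```

If this returns True, waiting and retrying later, or reusing cookies from a logged-in browser session, is a common workaround.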

2.4 Parse HTML code

Convert the obtained HTML into a BeautifulSoup object, then use BeautifulSoup to parse it and extract the target data. Note that the selectors below depend on the current structure of the Maoyan pages, which is fairly complex; some familiarity with HTML and BeautifulSoup helps here.

soup = BeautifulSoup(html, 'html.parser')
movie_title = soup.find('h1', class_='name').text
movie_info = soup.find_all('div', class_='movie-brief-container')[0]
movie_type = movie_info.find_all('li')[0].text          # genre
movie_actors = movie_info.find_all('li')[1].text        # cast
movie_release_date = movie_info.find_all('li')[2].text  # release date
movie_score = soup.find('span', class_='score-num').text
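The extraction logic can be exercised offline on a small HTML snippet that mirrors the assumed page structure. The snippet below is made up for illustration and is not a real Maoyan page:

```python
from bs4 import BeautifulSoup

# A minimal stand-in for the film page, using the same class names
# the crawler relies on.
sample_html = """
<h1 class="name">唐人街探案3</h1>
<div class="movie-brief-container">
  <ul>
    <li>喜剧,悬疑</li>
    <li>王宝强,刘昊然</li>
    <li>2021-02-12</li>
  </ul>
</div>
<span class="score-num">8.1</span>
"""

soup = BeautifulSoup(sample_html, 'html.parser')
title = soup.find('h1', class_='name').text
info = soup.find('div', class_='movie-brief-container')
genre, actors, release_date = (li.text for li in info.find_all('li'))
score = soup.find('span', class_='score-num').text
print(title, genre, release_date, score)
```

Testing selectors against a saved or hand-written snippet like this avoids hammering the live site while you debug.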

2.5 Saving data

After processing the HTML page, the extracted data needs to be saved locally. Python's built-in csv library is used here; it writes the data in CSV format, which is convenient for later processing.

with open('movie.csv', 'w', newline='', encoding='utf-8-sig') as csvfile:
    writer = csv.writer(csvfile)
    writer.writerow(['Title', movie_title])
    writer.writerow(['Genre', movie_type])
    writer.writerow(['Release date', movie_release_date])
    writer.writerow(['Cast', movie_actors])
    writer.writerow(['Score', movie_score])
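As a quick offline check of this write pattern, the rows can be written and read back in the same way; the sample values below are placeholders, not scraped data:

```python
import csv

rows = [
    ['Title', 'Detective Chinatown 3'],
    ['Score', '8.1'],
]

# Write with utf-8-sig: the BOM lets Excel detect UTF-8, so Chinese
# text in the file displays correctly when opened there.
with open('movie_demo.csv', 'w', newline='', encoding='utf-8-sig') as f:
    csv.writer(f).writerows(rows)

# Reading back with the same encoding strips the BOM again.
with open('movie_demo.csv', newline='', encoding='utf-8-sig') as f:
    read_back = list(csv.reader(f))

print(read_back)
```

The `newline=''` argument is required by the csv module on all platforms to avoid blank lines between rows on Windows.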

The complete code is as follows:

import requests
from bs4 import BeautifulSoup
import csv

url = 'https://maoyan.com/films/1250952'
headers = {
    'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/58.0.3029.110 Safari/537.36'
}
response = requests.get(url, headers=headers)
response.raise_for_status()
html = response.text

soup = BeautifulSoup(html, 'html.parser')
movie_title = soup.find('h1', class_='name').text
movie_info = soup.find_all('div', class_='movie-brief-container')[0]
movie_type = movie_info.find_all('li')[0].text          # genre
movie_actors = movie_info.find_all('li')[1].text        # cast
movie_release_date = movie_info.find_all('li')[2].text  # release date
movie_score = soup.find('span', class_='score-num').text

with open('movie.csv', 'w', newline='', encoding='utf-8-sig') as csvfile:
    writer = csv.writer(csvfile)
    writer.writerow(['Title', movie_title])
    writer.writerow(['Genre', movie_type])
    writer.writerow(['Release date', movie_release_date])
    writer.writerow(['Cast', movie_actors])
    writer.writerow(['Score', movie_score])

3. Summary

This article introduced how to implement a Maoyan movie crawler with Python's requests and BeautifulSoup libraries. By sending a network request, parsing the HTML, and saving the data, we can easily obtain the target movie's data and store it locally. Web crawling has broad application value in data collection, data mining, and related fields; continuous learning and hands-on practice are the way to improve.

The above is the detailed content of Practical crawler combat in Python: Maoyan movie crawler. For more information, please follow other related articles on the PHP Chinese website!

Statement:
The content of this article is voluntarily contributed by netizens, and the copyright belongs to the original author. This site does not assume corresponding legal responsibility. If you find any content suspected of plagiarism or infringement, please contact admin@php.cn