Python crawler analyzes 'Wolf Warrior' movie review
Introduction: As of August 20, the 25th day after "Wolf Warrior II" was released, its box office had exceeded 5 billion yuan, making it the only Asian film to enter the top 100 of world box-office history. This article uses a Python crawler to obtain data, analyzes the Douban reviews, and builds a word cloud of them. Let's take a look at what interesting subtext is hidden in the reviews of "Wolf Warrior II".
Quite apart from the explosive box office, the movie has also stirred up strong emotions in its audience. Some people even said, harshly, that anyone who dares to criticize "Wolf Warrior II" is either an idiot or a public enemy; it's as simple and crude as that.
Opinions on "Wolf Warrior II" are mixed, and viewers have left comments on Douban to express their views of the film. Although all kinds of reviews have been published and the media has made a fuss, audiences still cannot tell which opinion is more reliable.
So far there have been more than 150,000 comments. When you browse them, most of what you see over any given stretch is either praise or criticism, so it is hard to tell from browsing alone what the overall opinion of the movie is. Let's use data analysis to see what interesting things are hiding in these comments!
Data acquisition
This article uses data obtained with a Python crawler, relying mainly on the requests package and the regular-expression package re. The program does not handle CAPTCHAs. I had crawled Douban pages before, and because the amount of content was small back then, I never ran into a CAPTCHA. When I wrote this crawler I assumed there would be none either, but after about 15,000 comments had been crawled, a CAPTCHA popped up.
Then I thought: it's only about 120,000 comments, so at most I would have to enter the CAPTCHA a dozen or so times, and I wouldn't need to handle it in code. What happened next confused me a bit. After crawling about 15,000 comments and entering a CAPTCHA, I expected the crawl to run to around 30,000, but after roughly another 3,000 it stopped again and asked for a CAPTCHA once more.
It went on like that, stumbling along; sometimes it crawled for a long time before a CAPTCHA appeared, sometimes not. In the end, though, all the comments were crawled. The fields collected are: user name, whether the user has seen the film, the star rating of the comment, the time of the comment, the number of people who found it useful, and the comment text. The Python crawler code follows:
import requests
import re
import pandas as pd

url_first = 'https://movie.douban.com/subject/26363254/comments?start=0'
head = {'User-Agent': 'Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Ubuntu Chromium/59.0.3071.109 Chrome/59.0.3071.109 Safari/537.36'}
cookies = {'cookie': 'your own cookie'}  # i.e. the cookie of your own logged-in Douban account

html = requests.get(url_first, headers=head, cookies=cookies)

# Verify the two patterns below against the current HTML of the Douban comment page
# before running; the "next page" pattern is incomplete here.
reg = re.compile(r'')  # "next page" link
ren = re.compile(r'<span>(.*?)</span>.*?comment">(.*?).*?.*?<span .>(.*?).*?<span> (.*?)</span>.*?title="(.*?)"></span>.*?title="(.*?)">.*?class=""> (.*?) \n', re.S)  # user name, seen, stars, time, useful count, comment text

while html.status_code == 200:
    url_next = 'https://movie.douban.com/subject/26363254/comments' + re.findall(reg, html.text)[0]
    zhanlang = re.findall(ren, html.text)
    data = pd.DataFrame(zhanlang)
    data.to_csv('/home/wajuejiprince/document/zhanlang/zhanlangpinglun.csv', header=False, index=False, mode='a')  # append to the CSV file ('a' = append mode)
    data = []
    zhanlang = []
    html = requests.get(url_next, cookies=cookies, headers=head)
In the code above, fill in your own User-Agent, Cookie, and CSV save path, and verify the two regular expressions against the current markup of the Douban comment page; the crawled content is then saved to a CSV file.
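The crawler above does not handle the CAPTCHA at all; it simply stops when Douban stops returning a normal page. One hedged way to keep a long crawl going is to watch each response for signs of a block and pause for manual intervention. In the sketch below, the fetch_page helper and the check for the string 'captcha' in the response body are assumptions for illustration, not part of the original code.

import time
import requests

def fetch_page(url, head, cookies, max_retries=5):
    # Fetch a comment page; pause for manual CAPTCHA entry when Douban blocks us.
    # The 'captcha' substring check is an assumption about what the block page contains.
    for attempt in range(max_retries):
        resp = requests.get(url, headers=head, cookies=cookies)
        if resp.status_code == 200 and 'captcha' not in resp.text.lower():
            return resp
        # Non-200 status or suspected CAPTCHA page: solve it in a browser with the
        # same account, then retry with the same cookies.
        input("CAPTCHA or block suspected - solve it in the browser, then press Enter...")
        time.sleep(3)
    raise RuntimeError("Still blocked after %d retries: %s" % (max_retries, url))

With such a helper, the two requests.get calls in the crawl loop could be replaced by fetch_page(url_first, head, cookies) and fetch_page(url_next, head, cookies).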
This article uses R to process the data. Although we paid close attention to the structure of the content while crawling, it is inevitable that some values are not what we want; for example, some comment text ends up in the commenter field, so the data still needs to be cleaned.
First load all the packages to be used:
library(data.table)
library(plotly)
library(stringr)
library(jiebaR)
library(wordcloud2)
library(magrittr)
Import data and clean it:
dt <- fread("/home/wajuejiprince/document/zhanlang/zhanlangpinglun.csv")  # read the crawled CSV; after cleaning, the result used below is my_dt
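For readers who prefer to stay in Python, here is a rough pandas sketch of the same import-and-clean step. The column names, their order, and the Chinese rating labels are assumptions (the crawler wrote the CSV without a header); the filters shown are only examples of handling the stray values mentioned above.

import pandas as pd

# Column names are assumed for readability; their order follows the capture groups
# of the crawler's regular expression and should be checked against the CSV.
cols = ['user', 'seen', 'stars', 'time', 'useful', 'title', 'comment']
df = pd.read_csv('/home/wajuejiprince/document/zhanlang/zhanlangpinglun.csv',
                 header=None, names=cols)

# Example cleaning: keep only rows whose rating field is one of Douban's five labels
# (this catches rows where comment text spilled into the wrong field), then drop
# empty comments and exact duplicates.
valid_levels = ['力荐', '推荐', '还行', '较差', '很差']  # highly recommended ... very poor
df = (df[df['stars'].isin(valid_levels)]
        .dropna(subset=['comment'])
        .drop_duplicates()
        .reset_index(drop=True))
print(df.shape)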
Let's first look at how the comments break down by star rating:
plot_ly(my_dt[, .(.N), by = .(stars)], type = 'bar', x = ~stars, y = ~N)  # 'stars' stands for the star-rating column of my_dt
The star count corresponds to five levels: 5 stars means "highly recommended", 4 stars "recommended", 3 stars "okay", 2 stars "poor", and 1 star "very poor".
From the distribution of star ratings, we have good reason to believe that the vast majority of viewers are satisfied with this film.
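The same distribution can also be drawn in Python; a minimal sketch using the cleaned df, the 'stars' column, and the Chinese rating labels assumed in the previous sketch:

import matplotlib.pyplot as plt

# Map Douban's rating labels to the five English levels, then count comments per level.
label_map = {'力荐': 'highly recommended', '推荐': 'recommended',
             '还行': 'okay', '较差': 'poor', '很差': 'very poor'}
level_order = ['highly recommended', 'recommended', 'okay', 'poor', 'very poor']
counts = df['stars'].map(label_map).value_counts().reindex(level_order, fill_value=0)

counts.plot(kind='bar', rot=30, title='Number of comments per rating level')
plt.tight_layout()
plt.show()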
First we should segment the comments:
wk <- worker()  # build a jiebaR segmentation engine
Overall comment cloud display:
words <- segment(paste(my_dt$comment_text, collapse = " "), wk) %>% table() %>% as.data.table()  # word-frequency table; 'comment_text' is a placeholder for the review-text column
setnames(words, "N", "pinshu")  # rename the count column; "pinshu" means frequency
words[pinshu > 1000]  # keep only words that appear more than 1,000 times
wordcloud2(words[pinshu>1000], size = 2, fontFamily = "Microsoft Yahei", color = "random-light", backgroundColor = "grey")
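The pipeline above uses R's jiebaR and wordcloud2. As a hedged Python counterpart, the sketch below does the same with the third-party jieba and wordcloud packages; the df['comment'] column is the placeholder from the earlier sketches, the font path is only an example, and the cutoff mirrors the 1,000-occurrence threshold used in the R code.

from collections import Counter
import jieba
from wordcloud import WordCloud

# Segment every comment and count word frequencies, skipping single characters.
freq = Counter()
for text in df['comment'].astype(str):
    freq.update(w for w in jieba.cut(text) if len(w) > 1)

# Keep only words that occur more than 1,000 times, as in the R code above.
freq = {w: n for w, n in freq.items() if n > 1000}

# A CJK-capable font is required for Chinese words; this path is just an example.
wc = WordCloud(font_path='/usr/share/fonts/truetype/wqy/wqy-microhei.ttc',
               background_color='grey', width=1200, height=800)
wc.generate_from_frequencies(freq)
wc.to_file('zhanlang_wordcloud.png')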
Because there was so much data that my aging computer froze, I removed words with a frequency below 1,000 when making the cloud chart. The results are as follows:
Overall, everyone’s comments on this movie are pretty good! Topics such as plot, action, and patriotism are the focus of discussion.
The evaluation keywords include: Wu Jing, personal heroism, main theme, China, protagonist aura, Secretary Dakang, and "ran" (burning/hyped).
It can be seen that "ran" was not the dominant reaction after watching; audiences were more interested in Wu Jing himself and in commenting on patriotism and individual heroism.
Word clouds for different rating levels
But what would it look like if the comments from people with different ratings were displayed separately? That is, we create a word cloud for each of the five rating levels (highly recommended, recommended, okay, poor, very poor). The code is essentially the same as above: filter the comments by rating level first, then segment and plot, changing "highly recommended" to each of the other levels in turn.
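A minimal Python sketch of that per-level loop, reusing the assumed df, 'stars', and 'comment' placeholders from the sketches above:

from collections import Counter
import jieba
from wordcloud import WordCloud

FONT = '/usr/share/fonts/truetype/wqy/wqy-microhei.ttc'  # example path to a CJK font

# One word cloud per rating level, from 'highly recommended' down to 'very poor'.
for level in ['力荐', '推荐', '还行', '较差', '很差']:
    texts = df.loc[df['stars'] == level, 'comment'].astype(str)
    freq = Counter(w for t in texts for w in jieba.cut(t) if len(w) > 1)
    if not freq:
        continue  # no comments at this level, or the label does not match the data
    wc = WordCloud(font_path=FONT, background_color='grey', width=1200, height=800)
    wc.generate_from_frequencies(dict(freq))
    wc.to_file('cloud_%s.png' % level)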
1. Comment cloud of "highly recommended" reviewers
2. Comment cloud of "recommended" reviewers
3. Comment cloud of "okay" reviewers
4. Comment cloud of "poor" reviewers
5. Comment cloud of "very poor" reviewers
Conclusion
Judging from the word-segmentation results for the different rating levels, they all share a common topic: patriotism.
In absolute numbers there may well be more patriotism-related comments among the "highly recommended" reviews than among the negative ones, simply because there are far more positive reviews. But people who highly recommend the film are more willing to discuss things other than patriotism, whereas most of the negative comments revolve around patriotic topics. The proportions are the interesting part: going from "highly recommended" down to "very poor", the share of patriotism-related content gradually increases.
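One way to make that observation about proportions concrete is to count, for each rating level, the share of comments that mention a patriotism-related keyword. The keyword list below is a crude assumption for illustration, not the author's actual method:

# Share of comments per rating level that mention an (assumed) patriotism keyword.
keywords = '爱国|祖国|中国'  # rough, assumed list of patriotism-related terms
for level in ['力荐', '推荐', '还行', '较差', '很差']:
    texts = df.loc[df['stars'] == level, 'comment'].astype(str)
    if len(texts) == 0:
        continue
    share = texts.str.contains(keywords).mean()
    print('%s: %.1f%% of %d comments mention a patriotism keyword'
          % (level, 100 * share, len(texts)))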
We cannot subjectively decide who is right and who is wrong; we can only say that they stand at different angles and therefore see different things. When we disagree with others, it is often because we are looking from different perspectives. Those who left negative comments may simply have been thinking more about the patriotism angle (this is only a discussion of the topic of patriotism, not of who loves the country and who does not)!
After this analysis, the fundamental reason "Wolf Warrior II" has won the support of so many people is that its production delivers the American-blockbuster-level spectacle that the first "Wolf Warrior" lacked, while at the same time embodying patriotism, which strikes a chord and stirs people's hearts.