How to use machine learning to analyze sentiment
We applied several machine learning algorithms to a sentiment analysis task and compared their accuracy to determine which algorithm is best suited to this problem.
Sentiment analysis is an important task in natural language processing (NLP). Emotions are the feelings we have about an event, object, situation, or thing, and sentiment analysis is the research field concerned with automatically extracting those emotions from text. It began to develop in the early 1990s.
This article shows how to use machine learning (ML) for sentiment analysis and compares the results of different machine learning algorithms. The goal is not to study how to maximize algorithm performance.
Nowadays we live in a fast-paced society where almost any product can be purchased online and anyone can post a review. Negative online reviews can damage a company's reputation and hurt its sales, so it is very important for companies to use product reviews to understand what customers really want. However, there is far too much review data to read manually, one comment at a time. This is why sentiment analysis was born.
Now, let’s see how to use machine learning to develop a model to perform basic sentiment analysis.
The first step is to select a data set. You can use any public review data, such as tweets or movie reviews. The data set must contain at least two columns: the labels and the actual text segments.
The figure below shows some of the data sets we selected.
Figure 1: Data sample
Next, we import the required libraries:
```python
import pandas as pd
import numpy as np
from nltk.stem.porter import PorterStemmer
import re
import string
```
As you can see in the code above, we imported the NumPy and Pandas libraries to process the data. The other libraries will be explained when they are used.
With the data set ready and the required libraries imported, we can read the data set into our project using the Pandas library. The following code reads the data set into a Pandas DataFrame:
```python
sentiment_dataframe = pd.read_csv("/content/drive/MyDrive/Data/sentiments - sentiments.tsv", sep="\t")
```
Now that the data set has been imported into our project, we process the data so that the algorithm can better understand its features. We first name the columns in the data set, which is done with the following code:
```python
sentiment_dataframe.columns = ["label", "body_text"]
```
Then we numericize the label column: negative comments are replaced with 1, and positive comments are replaced with 0. The image below shows the values of sentiment_dataframe after these basic modifications.
Figure 2: Data frame with basic modifications
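The label numericization described above can be sketched as follows. This is a minimal, hypothetical sample; the article's real labels come from the TSV file:

```python
import pandas as pd

# Hypothetical sample rows standing in for the real data set.
df = pd.DataFrame({
    "label": ["negative", "positive", "negative"],
    "body_text": ["bad product", "works great", "broke fast"],
})
# Map string labels to numbers as the article does: negative -> 1, positive -> 0.
df["label"] = df["label"].map({"negative": 1, "positive": 0})
print(df["label"].tolist())  # [1, 0, 1]
```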
The next step is data preprocessing. This is a very important step, because machine learning algorithms can only process numerical data, not raw text. Feature extraction is therefore required to convert strings/text into numerical data. In addition, redundant and useless data must be removed, as they may contaminate the trained model. In this step we remove noisy data, data with missing values, and inconsistent data.
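A minimal cleaning sketch in the spirit described above, using the `re` and `string` modules imported earlier (the `clean_text` helper is hypothetical, not from the original article):

```python
import re
import string

def clean_text(text):
    """Lowercase, strip punctuation, and collapse repeated whitespace."""
    text = text.lower()
    text = text.translate(str.maketrans("", "", string.punctuation))
    return re.sub(r"\s+", " ", text).strip()

print(clean_text("GREAT!!  Product,  really    good."))  # great product really good
```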
For sentiment analysis, we add the length of the text and the punctuation count as features in the data frame. We also perform stemming, i.e., converting similar words (such as "give", "giving", etc.) to a single form. Once that is done, we split the data set into two parts: the feature values X and the target values y.
All of the above is done with the following code; the figure below shows the data frame after these steps.
Figure 3: Data frame after the division of the data set
```python
def count_punct(text):
    count = sum([1 for char in text if char in string.punctuation])
    return round(count / (len(text) - text.count(" ")), 3) * 100

tokenized_tweet = sentiment_dataframe['body_text'].apply(lambda x: x.split())
stemmer = PorterStemmer()
tokenized_tweet = tokenized_tweet.apply(lambda x: [stemmer.stem(i) for i in x])
for i in range(len(tokenized_tweet)):
    tokenized_tweet[i] = ' '.join(tokenized_tweet[i])
sentiment_dataframe['body_text'] = tokenized_tweet
sentiment_dataframe['body_len'] = sentiment_dataframe['body_text'].apply(lambda x: len(x) - x.count(" "))
sentiment_dataframe['punct%'] = sentiment_dataframe['body_text'].apply(lambda x: count_punct(x))
X = sentiment_dataframe['body_text']
y = sentiment_dataframe['label']
```
Next, we perform text feature extraction to numericize the text features. For this we use CountVectorizer, which returns a matrix of token counts.
After that, features such as the text length and punctuation count are computed in the data frame X. A sample of X is shown in the figure below.
Figure 4: Sample of final features
The data is now ready for training. The next step is to decide which algorithms to use to train the model. As mentioned earlier, we will try several machine learning algorithms and determine which works best for sentiment analysis. Since we intend to perform binary classification of the text, we use the following algorithms: K-nearest neighbors (KNN), logistic regression, support vector machine (SVM), random forest, decision tree, stochastic gradient descent (SGD), and naive Bayes.
First, we split the data set into a training set and a test set using the sklearn library, as shown in the following code:
```python
from sklearn.model_selection import train_test_split

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.20, random_state=99)
```
We use 20% of the data for testing and 80% for training. The point of splitting the data is to evaluate how well the trained model performs on data it has never seen, i.e., the test set.
Now let's train the first model. We start with the KNN algorithm: first train the model, then evaluate its accuracy (all of this can be done with Python's sklearn library). As shown in the following code, the accuracy of the KNN model is about 50%.
```python
from sklearn.neighbors import KNeighborsClassifier

model = KNeighborsClassifier(n_neighbors=3)
model.fit(X_train, y_train)
model.score(X_test, y_test)
# 0.5056689342403629
```
The code for the logistic regression model is very similar: import the function from the library, fit the model, and then evaluate it. The following code uses the logistic regression algorithm; its accuracy is about 66%.
```python
from sklearn.linear_model import LogisticRegression

model = LogisticRegression()
model.fit(X_train, y_train)
model.score(X_test, y_test)
# 0.6621315192743764
```
The following code uses SVM; its accuracy is about 68%.
```python
from sklearn import svm

model = svm.SVC(kernel='linear')
model.fit(X_train, y_train)
model.score(X_test, y_test)
# 0.6780045351473923
```
The following code uses the random forest algorithm; the accuracy of the random forest model is about 69%.
```python
from sklearn.ensemble import RandomForestClassifier

model = RandomForestClassifier()
model.fit(X_train, y_train)
model.score(X_test, y_test)
# 0.6938775510204082
```
Next we use the decision tree algorithm; its accuracy is about 62%.
```python
from sklearn.tree import DecisionTreeClassifier

model = DecisionTreeClassifier()
model = model.fit(X_train, y_train)
model.score(X_test, y_test)
# 0.6190476190476191
```
The following code uses the stochastic gradient descent (SGD) classifier; its accuracy is about 49%.
```python
from sklearn.linear_model import SGDClassifier

model = SGDClassifier()
model = model.fit(X_train, y_train)
model.score(X_test, y_test)
# 0.49206349206349204
```
The following code uses the naive Bayes algorithm; the accuracy of the naive Bayes model is about 60%.
```python
from sklearn.naive_bayes import GaussianNB

# Note: GaussianNB requires a dense array; if the features are a sparse
# matrix, pass X_train.toarray() instead.
model = GaussianNB()
model.fit(X_train, y_train)
model.score(X_test, y_test)
# 0.6009070294784581
```
Next, we plot the accuracy of all the algorithms, as shown in the figure below.
Figure 5: Accuracy performance of the different algorithms
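The comparison can be summarized programmatically. This sketch collects the (rounded) scores reported above into a dictionary and picks the best performer; a bar chart like Figure 5 could be drawn from the same dictionary with matplotlib's `plt.bar`:

```python
# Accuracy scores reported in the sections above, rounded to three decimals.
accuracies = {
    "KNN": 0.506,
    "Logistic Regression": 0.662,
    "SVM": 0.678,
    "Random Forest": 0.694,
    "Decision Tree": 0.619,
    "SGD": 0.492,
    "Naive Bayes": 0.601,
}
best = max(accuracies, key=accuracies.get)
print(best)  # Random Forest
```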
As we can see, the random forest algorithm achieves the best accuracy on this sentiment analysis problem, so we can conclude that random forest is the most suitable of the tested machine learning algorithms for sentiment analysis. We can further improve accuracy by engineering better features, trying other vectorization techniques, using a better data set, or using better classification algorithms.
Since random forest is the best algorithm for this sentiment analysis problem, here is a sample of predictions on preprocessed data. In the figure below you can see that the model makes correct predictions. Try this to improve your own projects!
Figure 6: Sample predictions made
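End-to-end prediction on new text can be sketched as follows. This is a self-contained toy example with hypothetical reviews and labels (1 = negative, 0 = positive, matching the encoding used earlier), not the article's actual data; the key point is that a new review must be transformed with the same fitted vectorizer before calling `predict`:

```python
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_extraction.text import CountVectorizer

# Hypothetical training reviews and labels (1 = negative, 0 = positive).
train_texts = ["terrible awful product", "great great product",
               "awful experience", "great quality"]
train_labels = [1, 0, 1, 0]

vectorizer = CountVectorizer()
X_train = vectorizer.fit_transform(train_texts)
model = RandomForestClassifier(random_state=0)
model.fit(X_train, train_labels)

# A new, unseen review is transformed with the SAME fitted vectorizer.
new_review = vectorizer.transform(["awful terrible quality"])
print(model.predict(new_review))
```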
The above is the detailed content of How to use machine learning to analyze sentiment. For more information, please follow other related articles on the PHP Chinese website!