
An introduction to how to use the naive Bayes algorithm in python

高洛峰 · Original · 2017-03-21 09:09:39

First, let me explain again why the title says "use" rather than "implement":

First of all, algorithms provided by professionals are better than the ones we write ourselves in terms of both efficiency and accuracy.

Secondly, for people who are not good at mathematics, working through a pile of formulas just to implement the algorithm is very painful.

Finally, unless the algorithms provided by others cannot meet your needs, there is no need to "reinvent the wheel".

Now, back to the topic. If you are not familiar with the Bayes algorithm, you can look up the relevant material; here is only a brief introduction:

1. Bayesian formula:

P(A|B) = P(AB) / P(B)

2. Bayesian inference:

P(A|B) = P(A) × P(B|A) / P(B)

Put into words: the probability that A occurs given that B has occurred equals the prior probability of A, multiplied by the probability of B given A (the likelihood, or "similarity"), divided by the probability of B. The problem the naive Bayes algorithm needs to solve is how to compute this likelihood, that is, the value of P(B|A).
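As a quick illustration of the inference formula, here is a minimal sketch with made-up numbers (a spam-filter example that is not part of the original article):

>>> p_spam = 0.2                            ## hypothetical prior: 20% of emails are spam
>>> p_free_given_spam = 0.6                 ## hypothetical likelihood: 60% of spam emails contain the word "free"
>>> p_free = 0.25                           ## hypothetical evidence: 25% of all emails contain "free"
>>> p_spam * p_free_given_spam / p_free     ## posterior P(spam | "free"), by Bayes' rule
0.48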

3. Three commonly used naive Bayes algorithms are provided in the scikit-learn package; they are explained in order below.

1) Gaussian naive Bayes: assumes that the attributes/features follow a normal distribution, and is mainly used for numerical features.

The example uses the dataset that comes with the scikit-learn package; the code and explanations are as follows:

>>> from sklearn import datasets                ## import the datasets bundled with scikit-learn
>>> iris = datasets.load_iris()                 ## load the iris dataset
>>> iris.feature_names                          ## show the feature names
    ['sepal length (cm)', 'sepal width (cm)', 'petal length (cm)', 'petal width (cm)']
>>> iris.data                                   ## show the data
    array([[ 5.1, 3.5, 1.4, 0.2],[ 4.9, 3. , 1.4, 0.2],[ 4.7, 3.2, 1.3, 0.2]............
>>> iris.data.size                              ## data size: 600 values (150 samples x 4 features)
>>> iris.target_names                           ## show the class names
    array(['setosa', 'versicolor', 'virginica'], dtype='<U10')
>>> from sklearn.naive_bayes import GaussianNB  ## import the Gaussian naive Bayes algorithm
>>> clf = GaussianNB()                          ## assign the classifier to a variable for convenience
>>> clf.fit(iris.data, iris.target)             ## train the classifier. For very large samples, partial_fit can be used to train in batches and avoid loading too much data into memory at once
>>> clf.predict(iris.data[0].reshape(1, -1))    ## verify the classification. Note: predict expects a 2-D array, while iris.data[0] is 1-D, so it needs to be reshaped
array([0])
>>> import numpy as np                          ## numpy is needed for the next step
>>> data = np.array([6, 4, 6, 2])               ## verify the classification on a new sample
>>> clf.predict(data.reshape(1, -1))
array([2])
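Since the comment above mentions partial_fit for very large samples, here is a minimal sketch of what batch training might look like (the batch size and the loop are illustrative, not from the original article):

>>> import numpy as np
>>> from sklearn.naive_bayes import GaussianNB
>>> clf = GaussianNB()
>>> classes = np.unique(iris.target)                    ## partial_fit needs the full list of classes on the first call
>>> for start in range(0, len(iris.data), 50):          ## feed the data in batches of 50 samples (illustrative batch size)
...     batch_X = iris.data[start:start + 50]
...     batch_y = iris.target[start:start + 50]
...     clf.partial_fit(batch_X, batch_y, classes=classes)
...
>>> clf.predict(iris.data[0].reshape(1, -1))            ## same check as before
array([0])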

This raises a question: how do you judge whether the data follows a normal distribution? In the R language there are functions for testing this, or you can simply see it by plotting the data, but that only works when P(x, y) can be drawn directly in a coordinate system. How to check this for the data in this example is still not clear to me; this part will be added later.
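One possible direction (my own sketch, not from the original article) is to apply a statistical normality test to each feature column, for example scipy.stats.normaltest:

>>> from scipy import stats
>>> for i, name in enumerate(iris.feature_names):            ## test each feature column separately
...     statistic, p_value = stats.normaltest(iris.data[:, i])
...     print(name, p_value)                                  ## a small p-value suggests the feature is probably not normally distributed
...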

2) Multinomial naive Bayes: often used for text classification, where each feature is a word and its value is the number of times the word appears.

## The example comes from the official documentation; see the first example for detailed explanations
>>> import numpy as np
>>> X = np.random.randint(5, size=(6, 100))    ## random integers in the range [0, 5), shape 6*100: 6 rows, 100 columns
>>> y = np.array([1, 2, 3, 4, 5, 6])
>>> from sklearn.naive_bayes import MultinomialNB
>>> clf = MultinomialNB()
>>> clf.fit(X, y)
MultinomialNB(alpha=1.0, class_prior=None, fit_prior=True)  
>>> print(clf.predict(X[2:3]))                 ## predict expects a 2-D array, so slice X[2:3] rather than indexing X[2]
[3]
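In practice, the word-count features for text classification usually come from a vectorizer such as CountVectorizer. A minimal sketch (the documents and labels below are made up purely for illustration):

>>> from sklearn.feature_extraction.text import CountVectorizer
>>> docs = ["free prize money", "meeting at noon", "free money now", "project meeting notes"]   ## made-up documents
>>> labels = [1, 0, 1, 0]                               ## made-up labels: 1 = spam, 0 = normal
>>> vectorizer = CountVectorizer()
>>> counts = vectorizer.fit_transform(docs)             ## word-count matrix: one row per document, one column per word
>>> clf = MultinomialNB().fit(counts, labels)
>>> clf.predict(vectorizer.transform(["free money"]))   ## classify a new document
array([1])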
3) Bernoulli naive Bayes: each feature is Boolean, taking the value 0 or 1, i.e., whether or not the word appears.

## The example comes from the official documentation; see the first example for detailed explanations
>>> import numpy as np
>>> X = np.random.randint(2, size=(6, 100))    ## random 0/1 values, shape 6*100
>>> Y = np.array([1, 2, 3, 4, 4, 5])
>>> from sklearn.naive_bayes import BernoulliNB
>>> clf = BernoulliNB()
>>> clf.fit(X, Y)
BernoulliNB(alpha=1.0, binarize=0.0, class_prior=None, fit_prior=True)  
>>> print(clf.predict(X[2:3]))                 ## again, slice to keep the input 2-D
[3]
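Note the binarize=0.0 parameter in the output above: it means BernoulliNB can also accept count features and threshold them into 0/1 itself. A small sketch reusing the made-up documents from the multinomial example above:

>>> from sklearn.naive_bayes import BernoulliNB
>>> clf = BernoulliNB(binarize=0.0)                       ## counts greater than 0 are treated as 1 (word present)
>>> clf.fit(counts, labels)                               ## 'counts' and 'labels' come from the multinomial sketch above
BernoulliNB(alpha=1.0, binarize=0.0, class_prior=None, fit_prior=True)
>>> clf.predict(vectorizer.transform(["free money"]))
array([1])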

