How to use CountVectorizer in Python's sklearn?
Reference: the official CountVectorizer documentation.
CountVectorizer converts a collection of text documents into a matrix of token counts.

If you do not provide an a priori dictionary and do not use an analyzer that performs some kind of feature selection, the number of features will equal the size of the vocabulary found by analyzing the data.
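A tiny English example makes this concrete. The sketch below (the names docs, vec, counts, and fixed are my own) shows the vocabulary being discovered from the data when no dictionary is supplied, and how passing the vocabulary= parameter fixes the feature set in advance instead. Note that get_feature_names_out() is the scikit-learn >= 1.0 spelling; older versions use get_feature_names().

from sklearn.feature_extraction.text import CountVectorizer

docs = ['the cat sat', 'the cat sat on the mat']

# No a priori vocabulary: the features are whatever tokens the analyzer finds
vec = CountVectorizer()
counts = vec.fit_transform(docs)
print(vec.get_feature_names_out())  # ['cat' 'mat' 'on' 'sat' 'the']
print(counts.toarray())             # [[1 0 0 1 1]
                                    #  [1 1 1 1 2]]

# With an a priori vocabulary, the feature set is fixed in advance
fixed = CountVectorizer(vocabulary=['cat', 'dog'])
print(fixed.fit_transform(docs).toarray())  # 'dog' never occurs, so its column is all zeros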
There are two ways to handle Chinese text: 1. feed the raw text into the model without word segmentation; 2. segment the text into words first (here with jieba). The vocabularies produced by the two methods differ greatly, as the demonstration below shows.
import re
import jieba
import pandas as pd
from sklearn.feature_extraction.text import CountVectorizer

# Raw data
text = ['很少在公众场合手机外放',
        '大部分人都还是很认真去学习的',
        '他们会用行动来',
        '无论你现在有多颓废,振作起来',
        '只需要一点点地改变',
        '你的外在和内在都能焕然一新']

# Keep only the Chinese characters
text = [' '.join(re.findall(r'[\u4e00-\u9fa5]+', tt, re.S)) for tt in text]

# Word segmentation with jieba
text = [' '.join(jieba.lcut(tt)) for tt in text]
text
# Build the model
vectorizer = CountVectorizer()

# Fit the model on the corpus
X = vectorizer.fit_transform(text)
# Vocabulary collected from all documents
# (on scikit-learn < 1.0, use vectorizer.get_feature_names() instead)
feature_names = vectorizer.get_feature_names_out()
print(feature_names)
Vocabulary generated without word segmentation:

Vocabulary generated after word segmentation:
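For comparison, the unsegmented vocabulary shown above comes from fitting directly on the raw sentences. A minimal sketch (the names raw_text, raw_vectorizer, and raw_X are my own): since the default token pattern \b\w\w+\b can only split at non-word characters, each space-free run of Chinese characters survives as a single token.

# Fit directly on the raw sentences, with no word segmentation
raw_text = ['很少在公众场合手机外放',
            '大部分人都还是很认真去学习的',
            '他们会用行动来',
            '无论你现在有多颓废,振作起来',
            '只需要一点点地改变',
            '你的外在和内在都能焕然一新']
raw_vectorizer = CountVectorizer()
raw_X = raw_vectorizer.fit_transform(raw_text)

# With no spaces to split on, each run of Chinese characters becomes one "word"
print(raw_vectorizer.get_feature_names_out())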
# Matrix of counts: one row per document, one column per vocabulary term
matrix = X.toarray()
print(matrix)
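As a side note, X itself is a SciPy sparse matrix; toarray() just densifies it for display. A quick shape check (my own addition):

# X is stored sparsely until toarray() densifies it
print(X.shape)  # (number of documents, number of vocabulary terms)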
# Convert the count matrix to a DataFrame
df = pd.DataFrame(matrix, columns=feature_names)
df
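One convenience of the DataFrame form, for example, is that column sums give each term's total frequency across the corpus (a small sketch continuing the example above):

# Total occurrences of each term across all documents, most frequent first
print(df.sum().sort_values(ascending=False).head())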
# Mapping from each term to its column index in the count matrix
print(vectorizer.vocabulary_)
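Because vocabulary_ maps each term to its column index, it can be used to look up the counts of a single term across all documents. A sketch, assuming '学习' is among the tokens jieba produced from the sentences above:

# Look up one term's column and read its count in every document
col = vectorizer.vocabulary_['学习']  # assumes '学习' is in the learned vocabulary
print(matrix[:, col])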