Text Classification with scikit-learn

This post walks through text mining and classification in Python: loading the dataset, extracting features, applying several classification algorithms (Naive Bayes, KNN, SVM), and running a clustering analysis. By comparing the accuracy of the different methods, it offers a hands-on guide to text mining techniques.


I couldn't find a unified benchmark in the text-mining papers, so I had to run the programs myself. If anyone passing by knows of a current benchmark for 20newsgroups or another good public dataset (ideally with classification results for all classes; whether all or only some features are used doesn't matter), please leave a comment. Many thanks!

Now, on to the main content. The 20newsgroups site provides three datasets; here we use the original one, 20news-19997.tar.gz.


The process breaks down into the following steps:

  • Load the dataset
  • Extract features
  • Classification
    • Naive Bayes
    • KNN
    • SVM
  • Clustering
Note: there is a reference example on the scipy site, but it looks a bit messy and contains bugs. In this post we work through it block by block.

Environment: Python 2.7 + Scipy (scikit-learn)

1. Load the dataset
Download the dataset 20news-19997.tar.gz, extract it into the scikit_learn_data folder, and load the data; see the comments in the code for details.
#first extract the 20 news_group dataset to /scikit_learn_data
from sklearn.datasets import fetch_20newsgroups
#all categories
#newsgroup_train = fetch_20newsgroups(subset='train')
#part categories
categories = ['comp.graphics',
              'comp.os.ms-windows.misc',
              'comp.sys.ibm.pc.hardware',
              'comp.sys.mac.hardware',
              'comp.windows.x'];
newsgroup_train = fetch_20newsgroups(subset = 'train', categories = categories);


We can check whether the data loaded correctly:
#print category names
from pprint import pprint
pprint(list(newsgroup_train.target_names))

Result:
['comp.graphics',
 'comp.os.ms-windows.misc',
 'comp.sys.ibm.pc.hardware',
 'comp.sys.mac.hardware',
 'comp.windows.x']

2. Extract features:
The newsgroup_train we just loaded is a collection of documents; we need to extract features from it, such as term frequencies, using fit_transform.

Method 1. HashingVectorizer, with a fixed number of features

#newsgroup_train.data is the original documents, but we need to extract the
#feature vectors in order to model the text data
from sklearn.feature_extraction.text import HashingVectorizer
vectorizer = HashingVectorizer(stop_words = 'english', non_negative = True,
                               n_features = 10000)
fea_train = vectorizer.fit_transform(newsgroup_train.data)
#the test split is loaded the same way (it is also re-loaded in section 3.1 below)
newsgroups_test = fetch_20newsgroups(subset = 'test', categories = categories);
fea_test = vectorizer.fit_transform(newsgroups_test.data);
#return feature vector 'fea_train' [n_samples,n_features]
print 'Size of fea_train:' + repr(fea_train.shape)
print 'Size of fea_test:' + repr(fea_test.shape)
#11314 documents, 130107 vectors for all categories
print 'The average feature sparsity is {0:.3f}%'.format(
    fea_train.nnz / float(fea_train.shape[0] * fea_train.shape[1]) * 100);

Result:
Size of fea_train:(2936, 10000)
Size of fea_test:(1955, 10000)
The average feature sparsity is 1.002%
Since we kept only 10,000 terms, i.e. a 10,000-dimensional feature space, the sparsity is not that low yet. Counting with TfidfVectorizer instead yields tens of thousands of feature dimensions; over all samples I counted more than 130,000 dimensions, which makes for a very sparse matrix. (Note also that HashingVectorizer is stateless, simply hashing tokens into a fixed number of buckets, which is why calling fit_transform separately on train and test is harmless here; the vocabulary-based vectorizers below do need their vocabulary shared.)


**************************************************************************************************************************

The code comments above point out that TF-IDF features extracted separately on train and test have different dimensions. How do we make them match? There are two ways:



Method 2. CountVectorizer+TfidfTransformer

Have the two CountVectorizers share a vocabulary:
#----------------------------------------------------
#method 1:CountVectorizer+TfidfTransformer
print '*************************\nCountVectorizer+TfidfTransformer\n*************************'
from sklearn.feature_extraction.text import CountVectorizer,TfidfTransformer
count_v1 = CountVectorizer(stop_words = 'english', max_df = 0.5);
counts_train = count_v1.fit_transform(newsgroup_train.data);
print "the shape of train is " + repr(counts_train.shape)
count_v2 = CountVectorizer(vocabulary = count_v1.vocabulary_);
counts_test = count_v2.fit_transform(newsgroups_test.data);
print "the shape of test is " + repr(counts_test.shape)
tfidftransformer = TfidfTransformer();
tfidf_train = tfidftransformer.fit(counts_train).transform(counts_train);
tfidf_test = tfidftransformer.fit(counts_test).transform(counts_test);

Result:
*************************
CountVectorizer+TfidfTransformer
*************************
the shape of train is (2936, 66433)
the shape of test is (1955, 66433)

Method 3. TfidfVectorizer

Have the two TfidfVectorizers share a vocabulary:
#method 2:TfidfVectorizer
print '*************************\nTfidfVectorizer\n*************************'
from sklearn.feature_extraction.text import TfidfVectorizer
tv = TfidfVectorizer(sublinear_tf = True,
                     max_df = 0.5,
                     stop_words = 'english');
tfidf_train_2 = tv.fit_transform(newsgroup_train.data);
tv2 = TfidfVectorizer(vocabulary = tv.vocabulary_);
tfidf_test_2 = tv2.fit_transform(newsgroups_test.data);
print "the shape of train is " + repr(tfidf_train_2.shape)
print "the shape of test is " + repr(tfidf_test_2.shape)
analyze = tv.build_analyzer()
tv.get_feature_names() #statistical features/terms


Result:
*************************
TfidfVectorizer
*************************
the shape of train is (2936, 66433)
the shape of test is (1955, 66433)
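
Incidentally, a simpler idiom worth knowing (a sketch of standard scikit-learn usage, not how the results above were produced): fit a single vectorizer on the training set, then call transform, rather than fit_transform, on the test set. The dimensions match automatically, and the IDF statistics learned from the training data are reused. Note that in Method 2 above the TfidfTransformer is re-fit on the test counts, so its test IDF weights actually differ from the train ones.

from sklearn.feature_extraction.text import TfidfVectorizer

#fit once on train, reuse on test -- dimensions match automatically
tv_alt = TfidfVectorizer(sublinear_tf = True, max_df = 0.5, stop_words = 'english')
tfidf_train_alt = tv_alt.fit_transform(newsgroup_train.data)
tfidf_test_alt = tv_alt.transform(newsgroups_test.data) #no re-fitting on test
print "the shape of test is " + repr(tfidf_test_alt.shape)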

In addition, sklearn ships a ready-made feature-fetching function, fetch_20newsgroups_vectorized.

Method 4. fetch_20newsgroups_vectorized

However, this method cannot pick out the features for just a few categories; it returns the features for all 20 categories at once:

print '*************************\nfetch_20newsgroups_vectorized\n*************************'
from sklearn.datasets import fetch_20newsgroups_vectorized
tfidf_train_3 = fetch_20newsgroups_vectorized(subset = 'train');
tfidf_test_3 = fetch_20newsgroups_vectorized(subset = 'test');
print "the shape of train is " + repr(tfidf_train_3.data.shape)
print "the shape of test is " + repr(tfidf_test_3.data.shape)


Result:
*************************
fetch_20newsgroups_vectorized
*************************
the shape of train is (11314, 130107)
the shape of test is (7532, 130107)
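
If you do want only a few classes out of the pre-vectorized data, one possible workaround is to mask the rows by target. This is my own sketch, not part of the original post; it assumes the returned Bunch exposes data, target and target_names, and that your scipy version supports fancy row indexing on sparse matrices:

import numpy as np

#hypothetical workaround: keep only the rows belonging to our five categories
wanted = [tfidf_train_3.target_names.index(c) for c in categories]
rows = np.where(np.in1d(tfidf_train_3.target, wanted))[0]
sub_data = tfidf_train_3.data[rows] #row-slice of the sparse tf-idf matrix
sub_target = tfidf_train_3.target[rows]
print sub_data.shape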




3. Classification
3.1 Multinomial Naive Bayes Classifier
See the code & comments; it speaks for itself.
######################################################
#Multinomial Naive Bayes Classifier
print '*************************\nNaive Bayes\n*************************'
from sklearn.naive_bayes import MultinomialNB
from sklearn import metrics
newsgroups_test = fetch_20newsgroups(subset = 'test',
                                     categories = categories);
fea_test = vectorizer.fit_transform(newsgroups_test.data);
#create the Multinomial Naive Bayesian Classifier
clf = MultinomialNB(alpha = 0.01)
clf.fit(fea_train, newsgroup_train.target);
pred = clf.predict(fea_test);
calculate_result(newsgroups_test.target, pred);
#notice here that f1_score is not equal to 2*precision*recall/(precision+recall)
#because the m_precision and m_recall we get are averaged; however,
#metrics.f1_score() calculates a weighted average, i.e., it takes the
#number of samples in each class into consideration.

Note the last few comment lines: why is f1 ≠ 2*(precision*recall)/(precision+recall)?

The calculate_result function used above computes precision, recall, and f1:

def calculate_result(actual,pred):
    m_precision = metrics.precision_score(actual,pred);
    m_recall = metrics.recall_score(actual,pred);
    print 'predict info:'
    print 'precision:{0:.3f}'.format(m_precision)
    print 'recall:{0:0.3f}'.format(m_recall);
    print 'f1-score:{0:.3f}'.format(metrics.f1_score(actual,pred));
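
To see concretely why the printed f1 need not equal 2*P*R/(P+R): with more than two classes, precision, recall and f1 are first computed per class and then averaged, and in the sklearn version used here the default average is weighted by class support. The harmonic-mean identity holds for each class individually, but not after averaging. A sketch using the explicit average= keyword (standard sklearn API):

from sklearn import metrics

p_w = metrics.precision_score(newsgroups_test.target, pred, average = 'weighted')
r_w = metrics.recall_score(newsgroups_test.target, pred, average = 'weighted')
f_w = metrics.f1_score(newsgroups_test.target, pred, average = 'weighted')
#f_w is a weighted mean of the per-class f1 values, so in general
#f_w != 2 * p_w * r_w / (p_w + r_w)
print metrics.classification_report(newsgroups_test.target, pred) #per-class breakdown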


3.2 KNN:

######################################################
#KNN Classifier
from sklearn.neighbors import KNeighborsClassifier
print '*************************\nKNN\n*************************'
knnclf = KNeighborsClassifier() #default with k=5
knnclf.fit(fea_train, newsgroup_train.target)
pred = knnclf.predict(fea_test);
calculate_result(newsgroups_test.target, pred);
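
The default k=5 is not necessarily a good choice for this data. A quick sketch for tuning k by cross-validation on the training set; the candidate values are my own arbitrary pick, and GridSearchCV lives in sklearn.grid_search in old releases (sklearn.model_selection in newer ones):

from sklearn.grid_search import GridSearchCV #sklearn.model_selection in newer releases
from sklearn.neighbors import KNeighborsClassifier

param_grid = {'n_neighbors': [1, 3, 5, 10, 15]} #arbitrary candidates
grid = GridSearchCV(KNeighborsClassifier(), param_grid, cv = 3)
grid.fit(fea_train, newsgroup_train.target)
print grid.best_params_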


3.3 SVM:

######################################################
#SVM Classifier
from sklearn.svm import SVC
print '*************************\nSVM\n*************************'
svclf = SVC(kernel = 'linear') #default with 'rbf'
svclf.fit(fea_train, newsgroup_train.target)
pred = svclf.predict(fea_test);
calculate_result(newsgroups_test.target, pred);
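
For high-dimensional sparse text features, LinearSVC (scikit-learn's liblinear-based linear SVM) is usually much faster than SVC(kernel = 'linear') and tends to score about the same. A sketch, assuming the same features as above:

from sklearn.svm import LinearSVC

svclf2 = LinearSVC() #linear SVM trained with liblinear
svclf2.fit(fea_train, newsgroup_train.target)
pred2 = svclf2.predict(fea_test)
calculate_result(newsgroups_test.target, pred2)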


Results:

*************************
Naive Bayes
*************************
predict info:
precision:0.764
recall:0.759
f1-score:0.760
*************************
KNN
*************************
predict info:
precision:0.642
recall:0.635
f1-score:0.636
*************************
SVM
*************************
predict info:
precision:0.777
recall:0.774
f1-score:0.774



4. Clustering

######################################################
#KMeans Cluster
from sklearn.cluster import KMeans
print '*************************\nKMeans\n*************************'
pred = KMeans(n_clusters = 5)
pred.fit(fea_test)
calculate_result(newsgroups_test.target, pred.labels_);


Result:

*************************
KMeans
*************************
predict info:
precision:0.264
recall:0.226
f1-score:0.213
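
One caveat on this evaluation: KMeans assigns arbitrary cluster ids (some permutation of 0..4), so feeding pred.labels_ straight into precision/recall understates the clustering quality whenever the ids do not happen to line up with the true class labels. Permutation-invariant clustering metrics are the usual fix. This check is my addition, not part of the original post:

from sklearn import metrics

#permutation-invariant scores: unaffected by how cluster ids are numbered
print 'ARI: {0:.3f}'.format(
    metrics.adjusted_rand_score(newsgroups_test.target, pred.labels_))
print 'NMI: {0:.3f}'.format(
    metrics.normalized_mutual_info_score(newsgroups_test.target, pred.labels_))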



Download all the code from this post: here


The scores look rather low... so let's use all the features instead. The results:

*************************
Naive Bayes
*************************
predict info:
precision:0.771
recall:0.770
f1-score:0.769
*************************
KNN
*************************
predict info:
precision:0.652
recall:0.645
f1-score:0.645
*************************
SVM
*************************
predict info:
precision:0.819
recall:0.816
f1-score:0.816
*************************
KMeans
*************************
predict info:
precision:0.289
recall:0.313
f1-score:0.266



More Python learning materials will continue to be posted; follow this blog and Rachel Zhang on Sina Weibo.





           
