In order to do machine learning on text, we need to convert the text into numerical feature vectors. In the bag of words representation, the text is converted to numerical vectors whose columns correspond to the underlying words, and the values can be any of the following:
- Binary, which indicates whether the word is present/absent in the given document
- Frequency, which indicates the count of the word in the given document
- TFIDF, which is a score that we will cover subsequently
Building the bag of words representation is a two-step process, as follows:
1. For every word present in the documents of the training set, we assign an integer id and store this mapping as a dictionary.
2. For every document, we create a vector. The columns of the vector are the actual words themselves; they form the features. The values of the cells are binary, frequency, or TFIDF, as illustrated in the sketch right after this list.
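To make these two steps concrete, here is a minimal hand-rolled sketch in plain Python, using our own toy documents and variable names (CountVectorizer, used in the recipe below, does all of this for us):
# Step 1: assign an integer index to every word seen in the corpus.
docs = ["text mining derives information from text",
        "text analytics is roughly equivalent to text mining"]
vocabulary = {}
for doc in docs:
    for word in doc.split():
        if word not in vocabulary:
            vocabulary[word] = len(vocabulary)
# Step 2: build one count vector per document.
vectors = []
for doc in docs:
    vec = [0] * len(vocabulary)
    for word in doc.split():
        vec[vocabulary[word]] += 1
    vectors.append(vec)
print(vocabulary)
print(vectors)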
Tip
Depending on your application, the notion of a document can change. In this case, each sentence is considered a document. In some cases, we can also treat a paragraph as a document. In web page mining, a single web page can be treated as a document, or the parts of the web page separated by <p> tags can each be treated as a document. In our case, the input text splits into 6 sentences, so we have 6 documents.
Example
#!/usr/bin/env python2
# -*- coding: utf-8 -*-
"""
@author: snaildove
"""
# Load Libraries
from nltk.tokenize import sent_tokenize
from sklearn.feature_extraction.text import CountVectorizer
from nltk.corpus import stopwords
# 1. Our input text; we use the same input that we used in the stop word removal recipe.
text = "Text mining, also referred to as text data mining, roughly equivalent to text analytics, refers to the process of deriving high-quality information from text. Highquality information is typically derived through the devising of patterns and trends through means such as statistical pattern learning. Text mining usually involves the process of structuring the input text (usually parsing, along with the addition of some derived linguistic features and the removal of others, and subsequent insertion into a database), deriving patterns within the structured data, and finally evaluation and interpretation of the output. 'High quality' in text mining usually refers to some combination of relevance, novelty, and interestingness. Typical text mining tasks include text categorization, text clustering, concept/entity extraction, production of granular taxonomies, sentiment analysis, document summarization, and entity relation modeling (i.e., learning relations between named entities).Text analysis involves information retrieval, lexical analysis to study word frequency distributions, pattern recognition, tagging/annotation, information extraction, data mining techniques including link and association analysis, visualization, and predictive analytics. The overarching goal is, essentially, to turn text into data for analysis, via application of natural language processing (NLP) and analytical methods.A typical application is to scan a set of documents written in a natural language and either model the document set for predictive classification purposes or populate a database or search index with the information extracted."
# Let's jump into how to transform the text into a bag of words representation.
# 2. Let us divide the given text into sentences.
sentences = sent_tokenize(text)
print len(sentences)
# 3. Let us write the code to generate feature vectors.
count_v = CountVectorizer()
tdm = count_v.fit_transform(sentences)
print "num of features/vocabulary :"
print len(count_v.vocabulary_)
print "vocabulary: "
print count_v.vocabulary_
print "tdm : "
print tdm
print "type of tdm: "
print type(tdm)
print "params of CountVectorizer class: "
print count_v._get_param_names()
output :
6
num of features/vocabulary :
123
vocabulary:
{u'nlp': 66, u'named': 64, u'concept': 16, u'interpretation': 50, u'features': 33, u'classification': 13,
u'text': 108, u'into': 51, u'within': 120, u'entity': 27, u'structuring': 99, u'via': 117, u'through':
110, u'statistical': 97, u'such': 102, u'quality': 82, u'linguistic': 57, u'clustering': 14, u'visualization':
118, u'categorization': 12, u'from': 37, u'to': 111, u'addition': 0, u'structured': 98, u'relations': 87,
.............................................................................................................
, u'usually': 116, u'model': 62, u'typically': 115, u'or': 69, u'relation': 86, u'typical': 114}
tdm :
(0, 37) 1
(0, 46) 1
: :
(0, 108) 4
(1, 55) 1
: :
(5, 111) 2
(5, 108) 1
type of tdm:
<class 'scipy.sparse.csr.csr_matrix'>
params of CountVectorizer class:
['analyzer', 'binary', 'decode_error', 'dtype', 'encoding', 'input', 'lowercase',
'max_df', 'max_features', 'min_df', 'ngram_range', 'preprocessor', 'stop_words'
, 'strip_accents', 'token_pattern', 'tokenizer', 'vocabulary']
The vocabulary_ attribute of the CountVectorizer object is a mapping from terms to feature indices. We can also use the following function to get the list of words (features):
count_v.get_feature_names()
output :
[u'addition', u'along', u'also', u'analysis', u'analytical', u'analytics', u'and',
u'annotation', u'application', u'as', u'association', u'between', u'categorization',
u'classification', u'clustering', u'combination', u'concept',u'data', u'database',
u'derived', u'deriving', u'devising', u'distributions', u'document', u'documents',
u'either', u'entities', u'entity', u'equivalent', u'essentially', u'evaluation', u'extracted'
, u'extraction', u'features', u'finally', u'for', u'frequency', u'from', u'goal', u'granular',
u'high', u'highquality', u'in', u'include', u'including', u'index', u'information', u'input',
u'insertion', u'interestingness', u'interpretation', u'into', u'involves', u'is', u'language',
u'learning', u'lexical', u'linguistic', u'link', u'means', u'methods', u'mining', u'model',
u'modeling', u'named', u'natural', u'nlp', u'novelty', u'of', u'or', u'others', u'output',
u'overarching', u'parsing', u'pattern', u'patterns', u'populate', u'predictive', u'process',
u'processing', u'production',u'purposes', u'quality', u'recognition', u'referred', u'refers',
u'relation', u'relations', u'relevance', u'removal', u'retrieval', u'roughly', u'scan', u'search',
u'sentiment', u'set', u'some', u'statistical', u'structured', u'structuring', u'study', u'subsequent',
u'such', u'summarization', u'tagging', u'tasks', u'taxonomies', u'techniques', u'text', u'the',
u'through', u'to', u'trends', u'turn', u'typical', u'typically', u'usually', u'via', u'visualization',
u'with', u'within', u'word', u'written']
The type of tdm is <class 'scipy.sparse.csr.csr_matrix'>; CSR stands for Compressed Sparse Row matrix. Refer to: https://docs.scipy.org/doc/scipy/reference/generated/scipy.sparse.csr_matrix.html
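Since this toy corpus is tiny, the sparse matrix can safely be converted to a dense array for inspection; the toarray() call and the column lookup below are just for illustration (for large corpora, keep the matrix sparse):
dense = tdm.toarray()                    # shape: (number of documents, number of features)
features = count_v.get_feature_names()   # list of terms, index-aligned with the columns
# Counts of the word 'text' in every sentence/document:
print(dense[:, features.index('text')])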
sklearn.feature_extraction.text.CountVectorizer
The CountVectorizer class offers many other parameters for transforming text into feature vectors. Let's look at some of them:
- binary : boolean, default=False. If True, all non-zero counts are set to 1, i.e. the vector records presence/absence instead of frequency.
- lowercase : boolean, True by default. Convert all characters to lowercase before tokenizing.
- stop_words : string {‘english’}, list, or None (default). If a list, that list is assumed to contain stop words, all of which will be removed from the resulting tokens. Only applies if analyzer == 'word'.
- ngram_range : tuple (min_n, max_n). The lower and upper boundary of the range of n-values for the n-grams to be extracted, e.g. (1, 2) extracts unigrams and bigrams.
For more about sklearn.feature_extraction.text.CountVectorizer, refer to: http://scikit-learn.org/stable/modules/generated/sklearn.feature_extraction.text.CountVectorizer.html
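As a quick illustration of the binary parameter, here is a toy one-sentence corpus of our own; with binary=True, every non-zero count is clipped to 1:
toy = ["text mining mines text and more text"]
print(CountVectorizer().fit_transform(toy).toarray())             # the 'text' column holds 3
print(CountVectorizer(binary=True).fit_transform(toy).toarray())  # the same column holds 1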
Applying some parameters:
# While creating a mapping from words to feature indices, we can ignore
# some words by providing a stop word list.
stop_words = stopwords.words('english')
count_v_sw = CountVectorizer(stop_words=stop_words)
sw_tdm = count_v_sw.fit_transform(sentences)
print "num of features/vocabulary :"
print len(count_v_sw.get_feature_names())
print "new tdm which removed stop_words : "
print sw_tdm
# Use ngrams
count_v_ngram = CountVectorizer(stop_words=stop_words, ngram_range=(1, 2))
ngram_tdm = count_v_ngram.fit_transform(sentences)
print "num of features/vocabulary :"
print len(count_v_ngram.get_feature_names())
print "ngram tdm which removed stop_words : "
print ngram_tdm
output :
num of features/vocabulary :
107
new tdm which removed stop_words :
(0, 40) 1
(0, 72) 1
: :
(5, 99) 1
(5, 15) 1
(5, 40) 1
(5, 14) 1
(5, 96) 1
num of features/vocabulary :
250
ngram tdm which removed stop_words :
(0, 96) 1
(0, 169) 1
: :
(5, 92) 1
(5, 33) 1
(5, 219) 1
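With ngram_range=(1, 2), the feature list mixes single words with bigrams (two adjacent words joined by a space). A quick way to check this, reusing count_v_ngram from above:
ngram_features = count_v_ngram.get_feature_names()
bigrams = [f for f in ngram_features if ' ' in f]
print(len(bigrams))   # number of bigram features
print(bigrams[:5])    # a few sample bigram features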
Term frequencies and inverse document frequencies
TF-IDF weights a term's raw count in a document (the term frequency) by how rare the term is across the corpus (the inverse document frequency), so words that appear in almost every document receive low scores, while words that are distinctive for a document receive high scores.
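Per cell, TfidfTransformer with its default settings (smooth_idf=True, sublinear_tf=False) computes roughly the following, after which every row is L2-normalized. A small numeric sketch with made-up counts:
import math
n_documents = 6   # our corpus has 6 sentences/documents
df = 5            # assume the term occurs in 5 of them
tf = 4            # and 4 times in the current document
idf = math.log((1.0 + n_documents) / (1.0 + df)) + 1
print(tf * idf)   # the unnormalized tf-idf value for this cell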
#!/usr/bin/env python2
# -*- coding: utf-8 -*-
"""
@author: snaildove
"""
# Load Libraries
from nltk.tokenize import sent_tokenize
from nltk.corpus import stopwords
from sklearn.feature_extraction.text import TfidfTransformer
from sklearn.feature_extraction.text import CountVectorizer
# 1. We create an input document as in the previous recipe.
text = "Text mining, also referred to as text data mining, roughly equivalent to text analytics, refers to the process of deriving high-quality information from text. Highquality information is typically derived through the devising of patterns and trends through means such as statistical pattern learning. Text mining usually involves the process of structuring the input text (usually parsing, along with the addition of some derived linguistic features and the removal of others, and subsequent insertion into a database), deriving patterns within the structured data, and finally evaluation and interpretation of the output. 'High quality' in text mining usually refers to some combination of relevance, novelty, and interestingness. Typical text mining tasks include text categorization, text clustering, concept/entity extraction, production of granular taxonomies, sentiment analysis, document summarization, and entity relation modeling (i.e., learning relations between named entities).Text analysis involves information retrieval, lexical analysis to study word frequency distributions, pattern recognition, tagging/annotation, information extraction, data mining techniques including link and association analysis, visualization, and predictive analytics. The overarching goal is, essentially, to turn text into data for analysis, via application of natural language processing (NLP) and analytical methods.A typical application is to scan a set of documents written in a natural language and either model the document set for predictive classification purposes or populate a database or search index with the information extracted."
#Let’s see how to find the term frequency and inverse document frequency:
# 2. Let us extract the sentences.
sentences = sent_tokenize(text)
print "num of sentences :"
print len(sentences)
# 3. Create a matrix of term document frequency.
stop_words = stopwords.words('english')
count_v = CountVectorizer(stop_words=stop_words)
tdm = count_v.fit_transform(sentences)
print "vocabulary: "
print count_v.vocabulary_
print "tdm : "
print tdm
# 4. Calculate the TFIDF score.
tfidf = TfidfTransformer()
tdm_tfidf = tfidf.fit_transform(tdm)
print "tf-idf :"
print tdm_tfidf.data
output :
num of sentences :
6
vocabulary:
{u'nlp': 58, u'named': 56, u'concept': 13, u'interpretation': 44, u'features': 30, u'classification': 10,
u'text': 96, ..............................................................................................
,u'usually': 101, u'model': 54, u'typically': 100, u'retrieval': 80, u'involves': 45, u'typical': 99}
tdm :
(0, 40) 1
(0, 72) 1
: :
(0, 96) 4
(1, 47) 1
: :
(5, 14) 1
(5, 96) 1
tf-idf :
[ 0.54105639 0.31326362 0.26401921 ..., 0.15746858 0.15746858
0.15746858]
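As a side note, scikit-learn also provides TfidfVectorizer, which combines CountVectorizer and TfidfTransformer in a single step; a minimal sketch on the same sentences, which produces a roughly equivalent weighted matrix:
from sklearn.feature_extraction.text import TfidfVectorizer

tfidf_v = TfidfVectorizer(stop_words=stop_words)
tdm_tfidf_direct = tfidf_v.fit_transform(sentences)
print(tdm_tfidf_direct.shape)   # (number of sentences, number of features)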