This week, we focus on text classification.
Tip: text classification can be used for sentiment analysis.
Text preprocessing
Tokenization
How we process text depends on what we regard a text as: a sequence of
- characters
- words
- phrases and named entities
- sentences
- paragraphs
Here, we think of text as a sequence of words, because a word is a meaningful sequence of characters.
Therefore, we should extract all the words from a sentence. This process is called tokenization. So what is the boundary of a word?
Here, we mainly talk about English.
In English, we can split a sentence by spaces or by punctuation.
Three tokenizers are built into Python's nltk library:
- whitespace tokenizer (WhitespaceTokenizer)
- punctuation tokenizer (WordPunctTokenizer)
- Treebank word tokenizer (TreebankWordTokenizer)
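A quick sketch of how the three differ (assuming nltk is installed; the sample sentence is made up for illustration):

```python
from nltk.tokenize import WhitespaceTokenizer, WordPunctTokenizer, TreebankWordTokenizer

text = "Don't you love NLP? It's great."

ws = WhitespaceTokenizer().tokenize(text)    # split on spaces only
wp = WordPunctTokenizer().tokenize(text)     # split word runs from punctuation runs
tb = TreebankWordTokenizer().tokenize(text)  # Penn Treebank rules

print(ws)  # punctuation sticks to words: "NLP?", "great."
print(wp)  # contractions break apart: "Don", "'", "t"
print(tb)  # linguistically sensible splits: "Do", "n't", "It", "'s"
```

The Treebank tokenizer usually gives the most useful tokens, since it keeps contractions like "n't" and "'s" as meaningful units.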

Normalization
- stemming: strip word endings with heuristic rules (e.g. the Porter stemmer)
- lemmatization: reduce a word to its dictionary form (its lemma) using a vocabulary and morphological analysis
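A minimal stemming sketch with nltk's Porter stemmer (the word list is made up):

```python
from nltk.stem import PorterStemmer

stemmer = PorterStemmer()
words = ["cats", "talked", "wolves", "feet"]
stems = {w: stemmer.stem(w) for w in words}
print(stems)

# Stemming is rule-based, so irregular forms slip through: "feet" stays "feet".
# A lemmatizer (nltk.stem.WordNetLemmatizer) would map "feet" -> "foot", but it
# needs the WordNet corpus downloaded first: nltk.download("wordnet").
```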


Transforming tokens into features (text to vector)
== Bag of words ==
- count the occurrences of each token in our text

Problems:
- we lose word order
- the counters are not normalized

So:
- for word order, we also count token pairs, triplets, etc.: n-grams
- but then there are too many features
- so we remove some n-grams based on their document frequency (df), i.e. in how many documents of our corpus they occur, dropping those whose df is too high or too low
- the remaining features appear moderately often across documents; next, we focus on the value of each feature column: the term frequency (TF)

- more precisely, instead of just keeping medium-df n-grams, we can use df directly: weight each term by its inverse document frequency (IDF)
- then we multiply TF and IDF together as the value of the feature column: this is TF-IDF

Python code

At this point, you have vectorized your text into a vector of numbers, but you still haven't done the classification itself. The simplest classifier to apply is logistic regression.
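A hedged sketch of the full pipeline (vectorize, then classify); the toy texts and labels are made up for illustration:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy sentiment data: 1 = positive, 0 = negative (illustrative only).
texts = ["good movie", "great film, loved it",
         "boring movie", "terrible, did not like it"]
labels = [1, 1, 0, 0]

# Vectorize the text, then fit a linear classifier on the TF-IDF features.
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(texts, labels)

print(model.predict(["loved this great movie"]))
```

Logistic regression works well here because TF-IDF features are high-dimensional and sparse, a setting where linear models are both fast and hard to beat.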