Chapter 7 of the book has an introduction to n-gram models, and the author uses 2-grams to walk through the model in detail.
In the data-normalization part, to deduplicate the data and count frequencies, the author brings in OrderedDict from the collections library. OrderedDict itself only remembers insertion order; combined with sorted(), it gives you a dictionary whose entries are ordered by value. The author didn't show the complete code, though, so simply pasting in that one OrderedDict call does nothing; the missing code has to be filled in.
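As a quick sketch of how that OrderedDict call behaves (the freq dict below is made-up sample data, not output from the book's scraper):

```python
from collections import OrderedDict

# Hypothetical frequency dict, the kind an n-gram counter would produce
freq = {"software foundation": 3, "python software": 5, "van rossum": 2}

# sorted() orders the (key, value) pairs by value, descending;
# OrderedDict then preserves that order (plain dicts did not before Python 3.7)
ordered = OrderedDict(sorted(freq.items(), key=lambda t: t[1], reverse=True))
print(list(ordered.keys()))  # most frequent n-gram first
```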
Afraid of forgetting, I chose to take notes as I go: whenever I hit a problem and solve it, I write it down along with my understanding.
Full code:
from urllib.request import urlopen
from bs4 import BeautifulSoup
from collections import OrderedDict
import re
import string
def isCommon(ngram):
    commonWords = ["the", "be", "and", "of", "a", "in", "to", "have",
                   "it", "i", "that", "for", "you", "he", "with", "on", "do", "say",
                   "this", "they", "is", "an", "at", "but", "we", "his", "from", "that",
                   "not", "by", "she", "or", "as", "what", "go", "their", "can", "who",
                   "get", "if", "would", "her", "all", "my", "make", "about", "know",
                   "will", "as", "up", "one", "time", "has", "been", "there", "year", "so",
                   "think", "when", "which", "them", "some", "me", "people", "take", "out",
                   "into", "just", "see", "him", "your", "come", "could", "now", "than",
                   "like", "other", "how", "then", "its", "our", "two", "more", "these",
                   "want", "way", "look", "first", "also", "new", "because", "day", "more",
                   "use", "no", "man", "find", "here", "thing", "give", "many", "well"]
    return ngram in commonWords
def cleanInput(input):
    input = re.sub(r'\n+', " ", input)          # collapse newlines into spaces
    input = re.sub(r'\[[0-9]*\]', "", input)    # strip citation markers like [1]
    input = re.sub(' +', " ", input)            # collapse repeated spaces
    input = bytes(input, "UTF-8")
    input = input.decode("ascii", "ignore")     # drop non-ASCII characters
    cleanInput = []
    input = input.split(' ')                    # split on spaces into words
    for item in input:
        item = item.strip(string.punctuation)
        if len(item) > 1 or (item.lower() == 'a' or item.lower() == 'i'):
            cleanInput.append(item)
    return cleanInput
def ngrams(input, n):
    input = cleanInput(input)
    output = {}
    for i in range(len(input)-n+1):
        ngramTemp = " ".join(input[i:i+n])
        words = ngramTemp.split()
        # skip any 2-gram that contains a common word; compare as lowercased
        # str (encoding ngramTemp to bytes here, as some versions of this code
        # do, makes the isCommon membership test silently fail)
        if isCommon(words[0].lower()) or isCommon(words[1].lower()):
            continue
        if ngramTemp not in output:
            output[ngramTemp] = 0
        output[ngramTemp] += 1
    return output
html = urlopen("http://en.wikipedia.org/wiki/Python_(programming_language)")
bsObj = BeautifulSoup(html, "html.parser")
content = bsObj.find("div", {"id": "mw-content-text"}).get_text()
ngrams = ngrams(content, 2)
ngrams = OrderedDict(sorted(ngrams.items(), key=lambda t: t[1], reverse=True))
print(ngrams)
The isCommon function tidies the collected data by filtering out common words, so the results are not flooded with meaningless combinations.
cleanInput tidies the collected data by stripping unwanted characters and extra spaces.
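A small sketch of what that cleaning does, step by step, on a made-up sample string:

```python
import re
import string

# Made-up sample input with a newline, double space, and citation marker
text = "Python  is\nan interpreted language.[1]"
text = re.sub(r'\n+', " ", text)          # newlines -> spaces
text = re.sub(r'\[[0-9]*\]', "", text)    # drop citation markers like [1]
text = re.sub(' +', " ", text)            # collapse repeated spaces
words = [w.strip(string.punctuation) for w in text.split(' ')]
# keep multi-letter words, plus the single-letter words "a" and "i"
words = [w for w in words if len(w) > 1 or w.lower() in ('a', 'i')]
print(words)  # ['Python', 'is', 'an', 'interpreted', 'language']
```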
The ngrams function, given the model size (collect two words at a time, or three, and so on), takes the resulting array of word groups, adds each group as a dictionary key, and records its frequency as the value.
The code missing from the book is the part inside ngrams that turns the list into a dictionary and records frequencies. Along the way I also added the isCommon helper that filters out useless words.
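That missing piece can be sketched in isolation like this (the token list is made-up sample data standing in for cleanInput's output):

```python
# Turn a token list into a {ngram: count} dict, as in the ngrams function
tokens = ["Python", "is", "fun", "and", "Python", "is", "popular"]
n = 2
output = {}
for i in range(len(tokens) - n + 1):
    ngramTemp = " ".join(tokens[i:i+n])
    if ngramTemp not in output:
        output[ngramTemp] = 0   # first sighting: create the key
    output[ngramTemp] += 1      # then count this occurrence
print(output["Python is"])  # this 2-gram occurs twice
```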
I had originally planned to write my own list-to-dictionary conversion and then hook it up to the book's OrderedDict call. I tried for a long time without success, then it occurred to me to search for n-gram model code, since a model this famous must have been implemented and written up by plenty of people. That worked, and I am attaching my own understanding of the added code here. Because this is what I puzzled over for so long, feel free to skip the part below if you don't need it.
Clearing up the approach
The idea is to treat the two-dimensional array as a one-dimensional one: join the two words of each unit with a space and assign the result to a new string. Next, filter out common words. After filtering, record frequencies: if the string is not in the dictionary yet, add it as a key with an initial count; if it is, add 1 to the value under that key. That finishes tidying the collected data, and afterwards the OrderedDict call does the job of sorting by value.
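The steps above can be sketched end to end on tiny made-up data (the stop list here is abbreviated to a few words purely for illustration):

```python
from collections import OrderedDict

commonWords = ["the", "of", "a", "is"]     # abbreviated stop list
tokens = ["quick", "brown", "fox", "quick", "brown", "the", "dog"]

output = {}
for i in range(len(tokens) - 1):
    pair = " ".join(tokens[i:i+2])           # join each 2-word unit with a space
    first, second = pair.split()
    if first in commonWords or second in commonWords:
        continue                             # filter out common words
    output[pair] = output.get(pair, 0) + 1   # record frequency

# finally, sort by value so the most frequent 2-gram comes first
ordered = OrderedDict(sorted(output.items(), key=lambda t: t[1], reverse=True))
print(ordered)
```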