Background: I am currently studying this material, and this piece of code happens not to run for a number of reasons, so I am taking the opportunity to fix it, both to deepen my understanding of the underlying concepts and to broaden my knowledge.
The original code:
# -*- coding: utf-8 -*-
from lxml import etree
from multiprocessing.dummy import Pool as ThreadPool
import requests
import json
import sys

reload(sys)
sys.setdefaultencoding('utf-8')  # Python 2 only: force utf-8 as the default encoding

'''Delete content.txt before re-running: the file is opened in append mode, so repeated runs keep adding to it.'''

def towrite(contentdict):
    # Append one reply's time, content and author to the shared output file.
    f.writelines(u'Reply time: ' + str(contentdict['topic_reply_time']) + '\n')
    f.writelines(u'Reply content: ' + unicode(contentdict['topic_reply_content']) + '\n')
    f.writelines(u'Author: ' + contentdict['user_name'] + '\n\n')

def spider(url):
    # Fetch one page of the thread and extract every post on it.
    html = requests.get(url)
    selector = etree.HTML(html.text)
    content_field = selector.xpath('//div[@class="l_post l_post_bright "]')
    item = {}
    for each in content_field:
        # Each post's metadata is stored as JSON in its data-field attribute.
        reply_info = json.loads(each.xpath('@data-field')[0].replace('"', ''))
        author = reply_info['author']['user_name']
        content = each.xpath('div[@class="d_post_content_main"]/div/cc/div[@class="d_post_content j_d_post_content "]/text()')[0]
        reply_time = reply_info['content']['date']
        print content
        print reply_time
        print author
        item['user_name'] = author
        item['topic_reply_content'] = content
        item['topic_reply_time'] = reply_time
        towrite(item)

if __name__ == '__main__':
    pool = ThreadPool(4)              # 4 worker threads
    f = open('content.txt', 'a')      # append mode; see the note above
    page = []
    for i in range(1, 21):            # pages 1-20 of the thread
        newpage = 'http://tieba.baidu.com/p/3522395718?pn=' + str(i)
        page.append(newpage)
    results = pool.map(spider, page)  # crawl the pages concurrently
    pool.close()
    pool.join()
    f.close()
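
Before going through the fixes, it helps to see what the json.loads call on the @data-field attribute is supposed to do. The snippet below is only an illustration: sample_field is a made-up, heavily simplified stand-in for the real attribute on tieba.baidu.com, but it has the same two keys the script reads.

```python
import json

# Hypothetical, simplified stand-in for a post's data-field attribute value;
# the real attribute on tieba.baidu.com is much longer and may have changed.
sample_field = '{"author": {"user_name": "some_user"}, "content": {"date": "2015-01-11 16:52"}}'

info = json.loads(sample_field)      # parse the JSON string into nested dicts
print(info['author']['user_name'])   # -> some_user          (the script's 'author')
print(info['content']['date'])       # -> 2015-01-11 16:52   (the script's 'reply_time')
```

That is why, after parsing, the script indexes reply_info['author']['user_name'] and reply_info['content']['date'].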
First, I looked up some reference material on the topics involved and list it here:
- How Python handles str vs. unicode: http://python.jobbole.com/81244/
- XPath selectors: https://doc.scrapy.org/en/0.14/topics/selectors.html
- What JSON is: http://blog.youkuaiyun.com/wishfly/article/details/7001460
- What json.loads does: http://blog.youkuaiyun.com/wishfly/article/details/7001460
- More on XPath path expressions: http://www.ruanyifeng.com/blog/2009/07/xpath_path_expressions.html (a small XPath sketch follows this list)
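
To make the XPath part of the script concrete, here is a minimal lxml sketch. The HTML string is made up to imitate the structure the spider expects; the real Tieba markup is far more complex and its class names may have changed, so treat this only as an illustration of absolute vs. relative XPath.

```python
from lxml import etree

# Made-up HTML that imitates the structure the spider expects;
# the real page at tieba.baidu.com differs and may have changed since.
html = '''
<div class="l_post l_post_bright ">
  <div class="d_post_content_main">
    <div><cc>
      <div class="d_post_content j_d_post_content ">Hello from a reply</div>
    </cc></div>
  </div>
</div>
'''

selector = etree.HTML(html)
# Absolute XPath (leading //): find every post container anywhere in the document.
posts = selector.xpath('//div[@class="l_post l_post_bright "]')
for post in posts:
    # Relative XPath (no leading //): search only inside the current post element.
    text = post.xpath('div[@class="d_post_content_main"]/div/cc/'
                      'div[@class="d_post_content j_d_post_content "]/text()')
    print(text)  # -> ['Hello from a reply']
```

The leading // in the first expression searches the whole document, while the second expression starts from the current post node, which is why the same relative path can be reused for every post in the loop.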