I stumbled across a blog post, https://blog.youkuaiyun.com/fwj_ntu/article/details/78237223,
and it reminded me of a long-standing wish: to learn web scraping. Since ready-made source code was available, I decided to just run it. The process turned out to be fairly bumpy
(partly, of course, because I am not familiar with Python).
1. The UTF-8 problem: a Python 2 source file containing Chinese characters needs an encoding declaration at the top:
# -*- coding: utf-8 -*-
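A minimal illustration of why the declaration matters (the string below is just a sample):

```python
# -*- coding: utf-8 -*-
# In Python 2 this declaration tells the interpreter that the source file
# is UTF-8 encoded; without it, non-ASCII literals raise a SyntaxError.
# Python 3 assumes UTF-8 by default, so there the line is optional.
s = u"数据分析"  # a Unicode literal; works in both Python 2 and 3
print(len(s))   # 4 code points, not the number of UTF-8 bytes
```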
2. The requests library
https://blog.youkuaiyun.com/neil4/article/details/54292873
During installation I hit:
Traceback (most recent call last):
  File "setup.py", line 10, in <module>
    from setuptools import setup
ImportError: No module named setuptools
So I first went to http://pypi.python.org/packages/source/s/setuptools/setuptools-0.6c11.tar.gz,
downloaded the setuptools package, and installed it with python setup.py install.
Looking more carefully, I found that requests depends on certifi:
https://pypi.python.org/pypi/certifi
Following https://blog.youkuaiyun.com/lyj_viviani/article/details/70568434
I installed pip, which makes this easier.
pip then reported:
PS D:\IT_Software\python_2_x_x\python\Scripts> .\pip install certifi
Requirement already satisfied: certifi in d:\it_software\python_2_x_x\python\lib\site-packages (2018.1.18)
requests 2.18.4 requires chardet<3.1.0,>=3.0.2, which is not installed.
requests 2.18.4 requires idna<2.7,>=2.5, which is not installed.
requests 2.18.4 requires urllib3<1.23,>=1.21.1, which is not installed.
After installing these one by one, `import requests` no longer raised an error.
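A quick way to check which of those dependencies are importable before trying `import requests` itself — a small sketch using only the standard library (Python 3):

```python
import importlib.util

def missing_modules(names):
    """Return the subset of module names that cannot be imported."""
    return [n for n in names if importlib.util.find_spec(n) is None]

# The dependencies pip complained about above:
print(missing_modules(["certifi", "chardet", "idna", "urllib3"]))
```

An empty list means all four are installed; any names printed still need `pip install`.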
The next problem: the urllib.request module could not be found. By comparing test runs, I worked out that the source code below has to run under Python 3.x.
Because of the version split, libraries like urllib are quite confusing for a newcomer like me.
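The confusion comes from the Python 2 → 3 reorganization: Python 2's urllib2 was folded into Python 3's urllib.request. A small compatibility sketch:

```python
import sys

# urllib was reorganized in Python 3: urllib2.urlopen became urllib.request.urlopen
if sys.version_info[0] >= 3:
    from urllib.request import urlopen
else:
    from urllib2 import urlopen  # Python 2 only

print(callable(urlopen))  # True on either version
```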
I also noticed the URL seems to have a problem: the HTTP response is not normal.
A quick walk through the source code:
# Target site: http://data.wxb.com/rank
import requests
import json
import urllib.request  # imported by the original script; not actually used below

# Build the HTTP headers
headers = {"Accept": "application/json, text/plain, */*",
           "Accept-Encoding": "gzip, deflate",
           "Cache-Control": "no-cache",
           "Connection": "close",
           "Cookie": "PHPSESSID=tp0vt9ahpnbku996vvuretjkc0; visit-wxb-id=837b20bccb3f77a1a8e5b6df0c4c4f20; IESESSION=alive; _qddamta_4009981236=3-0; pgv_pvi=2412895232; pgv_si=s7791044608; tencentSig=5037132800; wxb_fp_id=1381177533; Hm_lvt_5859c7e2fd49a1739a0b0f5a28532d91=1504086148; Hm_lpvt_5859c7e2fd49a1739a0b0f5a28532d91=1504087022; _qddaz=QD.sb0qu8.355lwa.j6yu23d8; _qdda=3-1.1; _qddab=3-996f34.j6yu23da",
           "Referer": "http://data.wxb.com/searchResult?kw=%E8%AF%81%E5%88%B8",
           "User-Agent": "Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/59.0.3071.109 Safari/537.36",
           "X-Postman-Interceptor-Id": "6c06f41b-ba61-17a1-9c50-fb545d4753e7",
           "X-Requested-With": "XMLHttpRequest"}
url = 'http://data.wxb.com/search'
# Write the results to a file
flow = open('Wechat public platforms.json', 'w')
mydata = []
for i in range(0, 25):
    myParams = {'page': i,
                'page_size': 10,
                'kw': '数据分析',  # search keyword: "data analysis"
                'category_id': '',
                'start_rank': '*',
                'end_rank': '*',
                'fans_min': '',
                'fans_max': '',
                'sort': '',
                'is_verify': 0,
                'is_original': 0,
                'is_continuous': 0}
    # Send an HTTP GET request to the URL with the parameters and headers
    req = requests.get(url, params=myParams, headers=headers)
    # Append the key-value pairs from the returned 'data' field to the list
    mydata.extend(req.json()['data'])
json.dump(mydata, flow)  # write everything to the JSON file
flow.close()
print(len(mydata))
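Since the site sometimes returns an abnormal HTTP response, `req.json()` can raise on a non-JSON body. A defensive helper might look like this — a sketch, assuming the response shape with a top-level 'data' list as in the script above:

```python
import json

def extract_data(body):
    """Return the 'data' list from a JSON response body,
    or [] if the body is not valid JSON or has no 'data' list."""
    try:
        payload = json.loads(body)
    except ValueError:  # covers json.JSONDecodeError too
        return []
    data = payload.get("data") if isinstance(payload, dict) else None
    return data if isinstance(data, list) else []

print(extract_data('{"data": [{"name": "demo"}]}'))  # [{'name': 'demo'}]
print(extract_data('<html>error page</html>'))       # []
```

Checking `req.status_code` (or calling `req.raise_for_status()`) before parsing would also surface the abnormal responses earlier.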
Because of the URL problem I could not see the script run to completion yet, but this still counts as my first small step into web scraping.