Set up a Python 3 environment and install a few packages with pip:
My system environment variables are probably not configured correctly: the tutorials online all run pip install xxx directly, but on my machine I have to prefix the command with python -m.
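For reference, the packages used later in this post can be installed from a terminal roughly like this (the exact package list is my assumption based on the code below):

python -m pip install requests
python -m pip install beautifulsoup4
python -m pip install jupyter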
Once that is done, find jupyter-notebook.exe in the Scripts folder of the Python installation and launch it. If you copy the address into a different browser, it asks for the server password; I followed the online tutorials to change it but did not succeed. The reason is unclear, something to resolve later.
Click New → Python 3 in the upper-right corner and start writing your first crawler.
A simple crawl of the Sina site:
import requests

res = requests.get('http://mobile.sina.com.cn/')  # fetch the mobile Sina homepage
res.encoding = 'utf-8'                            # set the encoding so Chinese text decodes correctly
print(res.text)                                   # dump the raw HTML
The principle behind this GET request is covered in the web-crawler course at http://study.163.com/my. To check what was actually fetched, open the page in Chrome, right-click → Inspect, go to the Network panel, filter by Doc, and look at the Preview tab (press F5 to reload so the request is recorded).
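You can also confirm the fetch directly in code. A minimal sketch using the same requests response (these checks are my own addition, not part of the course):

import requests

res = requests.get('http://mobile.sina.com.cn/')
print(res.status_code)                  # 200 means the page was fetched successfully
print(res.headers.get('Content-Type'))  # the content type declared by the server
res.encoding = 'utf-8'
print(len(res.text))                    # rough size of the returned HTML, as a sanity check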
A few more small experiments:
from bs4 import BeautifulSoup

# a minimal HTML document to parse
html_sample = '''
<html>
<head>
<meta charset="utf-8">
<title></title>
</head>
<body>
<h1> This is a test</h1>
</body>
</html>'''

soup = BeautifulSoup(html_sample, 'html.parser')
header = soup.select('h1')   # select() returns a list of matching tags
print(header)
print(header[0].text)        # .text gives the inner text of the first match
from bs4 import BeautifulSoup

# the two <h1> tags deliberately share the same id so that
# select('#title') returns both of them (duplicate ids are
# technically invalid HTML, but html.parser does not complain)
html_sample = '''
<!DOCTYPE html>
<html>
<head>
<meta charset="utf-8">
<title></title>
</head>
<body>
<h1 id="title">C++</h1>
<h1 id="title">Python</h1>
<a href="#" class="link"> This is link2 </a>
</body>
</html>'''

soup = BeautifulSoup(html_sample, 'html.parser')
alink = soup.select('#title')   # CSS id selector
print(alink)
print('\n')

for link in alink:
    print(link)        # the whole tag
print('\n')

for link in alink:
    print(link.text)   # only the text content
Output:
[<h1 id="title">C++</h1>, <h1 id="title">Python</h1>]

<h1 id="title">C++</h1>
<h1 id="title">Python</h1>

C++
Python
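The second sample also contains an <a> tag with class "link" that the code above never touches. Selecting by class and reading attributes works the same way; a small sketch reusing the soup object parsed above (this snippet is my addition, not from the original notes):

links = soup.select('.link')   # CSS class selector
for a in links:
    print(a.text)              # the link text: ' This is link2 '
    print(a['href'])           # attributes are accessed like dictionary keys: '#'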