一、 Important methods
① Sending a GET request:
import urllib.request
response = urllib.request.urlopen("http://baidu.com")
print(response.read().decode('utf-8'))
② Sending a POST request:
import urllib.request
import urllib.parse

data = bytes(urllib.parse.urlencode({"name": "hahahhaha", "sex": "boy"}), encoding="utf-8")
response = urllib.request.urlopen("http://httpbin.org/post", data=data)
print(response.read().decode("utf-8"))
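What makes this a POST request is the `data=` argument: `urlencode` serializes the dict into the `application/x-www-form-urlencoded` body that httpbin.org echoes back under its `form` key. A quick offline check of that encoding step (no network needed):

```python
from urllib.parse import urlencode

# urlencode turns a dict into the key=value&key=value form body;
# in Python 3.7+ the pairs follow the dict's insertion order
payload = urlencode({"name": "hahahhaha", "sex": "boy"})
print(payload)  # name=hahahhaha&sex=boy
```

This string, once converted to bytes, is exactly what `urlopen` puts in the request body.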
③ Timeout handling:
import urllib.request
import urllib.error

try:
    # Pass timeout= (in seconds); an unrealistically short value forces the failure for demonstration
    response = urllib.request.urlopen("http://douban.com", timeout=0.01)
    print(response.read().decode("utf-8"))
except urllib.error.URLError as e:
    print("request timed out")
④ Getting the response headers
import urllib.request
response = urllib.request.urlopen("http://www.baidu.com")
print(response.getheaders())
To get a single field from the headers:
import urllib.request
response = urllib.request.urlopen("http://www.baidu.com")
print(response.getheader("Server"))
Output:
BWS/1.1
⑤ When the crawler is detected and blocked, disguise it as a normal browser (for example, when accessing Douban):
import urllib.request
url = "http://www.douban.com"
headers = { 'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/84.0.4147.125 Safari/537.36 Edg/84.0.522.59'}
# Build a Request object that bundles the URL and the custom headers:
req = urllib.request.Request(url= url, headers= headers)
# Get the response object:
response = urllib.request.urlopen(req)
html = response.read().decode('utf-8')
print(html)
# Optionally save the page to a file:
#f = open("test2.html", "w", encoding='utf-8')
#f.write(html)
#f.close()
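A `Request` object can carry POST data together with the disguised headers, combining ② and ⑤. A minimal sketch (the httpbin.org URL is reused from ②; the shortened User-Agent string is just an illustration):

```python
import urllib.parse
import urllib.request

url = "http://httpbin.org/post"
headers = {'User-Agent': 'Mozilla/5.0'}
data = bytes(urllib.parse.urlencode({"name": "hahahhaha"}), encoding="utf-8")

# Passing data= switches the request method from GET to POST automatically
req = urllib.request.Request(url=url, data=data, headers=headers)
print(req.get_method())              # POST
print(req.get_header("User-agent"))  # Mozilla/5.0
```

Note that `Request` stores header names capitalized, so `get_header("User-agent")` is the spelling that retrieves the value. As before, pass `req` to `urllib.request.urlopen` to actually send it.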