Compared with the urllib module in Python's standard library, the Requests module offers a much friendlier API. It covers everything urllib can do and adds HTTP keep-alive with connection pooling, cookie-based session persistence, file uploads, automatic detection of the response content's encoding, and automatic encoding of internationalized URLs and POST data.
Source code: https://github.com/kennethreitz/requests. If you use it, go give it a star.
Basic GET request
import requests
# The simplest GET request can be sent directly with the get method
response = requests.get("http://www.baidu.com/")
# Add headers and query parameters
kw = {'wd':'詹姆斯'}
headers = {
'User-Agent':'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_13_5) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/69.0.3497.100 Safari/537.36'
}
url = "http://www.baidu.com/"
response = requests.get(url,params=kw,headers=headers)
# View the response body; response.text returns the data decoded to str (Unicode)
print(response.text)
# response.content returns the raw bytes
print(response.content)
# View the final URL, including the encoded query string
print(response.url)
# View the encoding used to decode the response
print(response.encoding)
# View the status code
print(response.status_code)
# View the headers of the request that was actually sent
print(response.request.headers)
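How `params` is encoded into the final URL can be inspected without sending anything, by preparing a request; a small sketch (the query values here are just examples):

```python
import requests

# Build, but do not send, a GET request to see how `params`
# are percent-encoded and appended to the URL.
req = requests.Request("GET", "http://www.baidu.com/s",
                       params={"wd": "python", "pn": "10"})
prepared = req.prepare()
print(prepared.url)  # http://www.baidu.com/s?wd=python&pn=10
```

Non-ASCII values such as 'wd': '詹姆斯' are percent-encoded the same way before the request goes out.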
Basic POST request
import requests
headers = {"User-Agent":"Mozilla/5.0 (Macintosh; Intel Mac OS X 10_13_5) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/69.0.3497.100 Safari/537.36"}
data = {
"from": "en",
"to": "zh",
"query": "i love python",
"simple_means_flag": "3",
"sign": "380634.77291",
"token": "fe28585d5933e4976b148c2ad934e01d",
}
# Note: everything after '#' is a URL fragment and is never sent to the server;
# Baidu Translate's real API lives at a different endpoint, and the sign/token
# values above are session-specific
url = "https://fanyi.baidu.com/#en/zh/i%20love%20python"
response = requests.post(url,data=data,headers=headers)
print(response.text)
# If the response body is JSON, it can be parsed directly
# print(response.json())
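Besides form data via `data=`, Requests can send a JSON body with the `json=` parameter; preparing the request shows what is actually serialized (the URL below is just a placeholder):

```python
import json
import requests

# `json=` serializes the dict and sets the Content-Type header
# automatically; preparing the request lets us inspect the body
# without making a network call.
req = requests.Request("POST", "http://example.com/api",
                       json={"query": "i love python"})
prepared = req.prepare()
print(prepared.headers["Content-Type"])  # application/json
print(json.loads(prepared.body))
```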
Proxies (the proxies parameter)
import requests
# Choose the proxy according to the protocol of the request
proxies = {
'http':'http://12.34.56.79:9527',
'https':'http://12.34.56.79:9527',
}
headers = {"User-Agent":"Mozilla/5.0 (Macintosh; Intel Mac OS X 10_13_5) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/69.0.3497.100 Safari/537.36"}
response = requests.get("http://www.baidu.com/",proxies=proxies,headers=headers,timeout=3)
print(response.text)
'''
Proxies can also be configured through the environment variables HTTP_PROXY and HTTPS_PROXY:
export HTTP_PROXY="http://12.34.56.79:9527"
export HTTPS_PROXY="https://12.34.56.79:9527"
'''
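When those environment variables are set, a Session picks them up through its environment-merging step; a sketch that checks this locally without sending a request:

```python
import os
import requests

# Set HTTP_PROXY in the environment, then ask the Session which
# settings it would merge in for a given URL (no request is sent).
os.environ["HTTP_PROXY"] = "http://12.34.56.79:9527"
session = requests.Session()
settings = session.merge_environment_settings(
    "http://www.baidu.com/", {}, None, None, None)
print(settings["proxies"])
```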
Private (authenticated) proxies
import requests
# If the proxy requires HTTP Basic Auth, embed the credentials in the proxy URL:
proxy = { "http": "http://mr_mao_hacker:sffqry9r@61.158.163.130:16816" }
response = requests.get("http://www.baidu.com", proxies = proxy)
print (response.text)
Web client authentication
import requests
auth=('test', '123456')
response = requests.get('http://192.168.199.107', auth = auth)
print (response.text)
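The (user, password) tuple is shorthand for requests.auth.HTTPBasicAuth; the Authorization header it generates can be inspected on a prepared request (the host below is a placeholder):

```python
import requests
from requests.auth import HTTPBasicAuth

# auth=('test', '123456') and HTTPBasicAuth('test', '123456') are
# equivalent; both add a base64-encoded Authorization header.
req = requests.Request("GET", "http://example.com/",
                       auth=HTTPBasicAuth("test", "123456"))
prepared = req.prepare()
print(prepared.headers["Authorization"])  # Basic dGVzdDoxMjM0NTY=
```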
Cookies
import requests
response = requests.get("http://www.baidu.com/")
# Get the CookieJar object from the response:
cookiejar = response.cookies
# Convert the CookieJar to a dict:
cookiedict = requests.utils.dict_from_cookiejar(cookiejar)
print (cookiejar)
print (cookiedict)
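The reverse helper also exists: build a CookieJar from a dict and attach it to a request via the `cookies` parameter (the values here are made up for illustration):

```python
import requests

# Convert a plain dict into a CookieJar, then prepare a request
# to see the Cookie header it produces (nothing is sent).
jar = requests.utils.cookiejar_from_dict({"sessionid": "abc123"})
req = requests.Request("GET", "http://example.com/", cookies=jar)
prepared = req.prepare()
print(prepared.headers["Cookie"])  # sessionid=abc123
```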
Logging in to Renren with a session
import requests
# 1. Create a session object, which keeps cookies across requests
session = requests.session()
# 2. Prepare the headers
headers = {"User-Agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/54.0.2840.99 Safari/537.36"}
# 3. The username and password needed to log in
data = {"email":"mr_mao_hacker@163.com", "password":"alarmchime"}
# 4. Send the login request; the cookies returned after login are saved in the session
session.post("http://www.renren.com/PLogin.do", data = data, headers = headers)
# 5. The session now carries the logged-in cookies, so pages that require login can be accessed directly
response = session.get("http://www.renren.com/410043129/profile")
# 6. Print the response body
print (response.text)
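What the session persists can be seen locally without logging in anywhere; a sketch with made-up header and cookie values (no request is sent):

```python
import requests

# A Session merges its stored headers and cookies into every
# request it prepares.
session = requests.Session()
session.headers.update({"User-Agent": "my-spider/1.0"})
session.cookies.set("token", "xyz")
prepared = session.prepare_request(
    requests.Request("GET", "http://example.com/"))
print(prepared.headers["User-Agent"])  # my-spider/1.0
print(prepared.headers["Cookie"])  # token=xyz
```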
Handling HTTPS requests: SSL certificate verification
If SSL certificate verification fails, or the server's certificate is not trusted, Requests raises an SSLError; the 12306 certificate is reportedly self-signed.
import requests
response = requests.get("https://www.baidu.com/", verify=True)
# verify=True is the default and can be omitted:
# response = requests.get("https://www.baidu.com/")
print (response.text)
# To skip verification, set verify to False
r = requests.get("https://www.12306.cn/mormhweb/", verify = False)