Several ways to retry timed-out requests in a Python crawler
In Python crawlers, errors such as request timeouts and proxy failures are very common. Below is a summary of several ways to retry requests made with the requests library:
Method 1: nested try/except
import requests

headers = {}
url = 'https://www.baidu.com'
proxies = None
try:
    response = requests.get(url, headers=headers, verify=False, proxies=proxies, timeout=3)
except Exception:
    # logdebug('requests failed one time')
    try:
        response = requests.get(url, headers=headers, verify=False, proxies=proxies, timeout=3)
    except Exception:
        # logdebug('requests failed two times')
        print('requests failed two times')
Summary: the code is verbose; every additional retry means another nested try block and more lines of code, but logging each attempt separately is easy.
Method 2: retry in a for loop
import requests

def requestDemo(url):
    headers = {}
    proxies = None
    trytimes = 3  # number of retry attempts
    for i in range(trytimes):
        try:
            response = requests.get(url, headers=headers, verify=False, proxies=proxies, timeout=3)
            # note: the status code here may also be 302 or something else
            if response.status_code == 200:
                break
        except Exception:
            # logdebug(f'requests failed {i} time')
            print(f'requests failed {i} time')
Summary: the loop is clearly much more compact than the first approach, and logging each attempt is still convenient.
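Note that the loop above only breaks out on success and never hands the result back to the caller. A minimal sketch of a variant that returns the response (or None when every attempt fails); the name request_with_retry is purely illustrative:

import requests

def request_with_retry(url, trytimes=3):
    for i in range(trytimes):
        try:
            response = requests.get(url, timeout=3)
            if response.status_code == 200:
                return response
        except requests.RequestException:
            print(f'requests failed {i} time')
    return None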
Method 3: recursive retry
import requests

def requestDemo(url, times=1):
    headers = {}
    proxies = None
    try:
        response = requests.get(url, headers=headers, verify=False, proxies=proxies, timeout=3)
        html = response.text
        # todo: normal processing logic goes here
        return html
    except Exception:
        # logdebug(f'requests failed {times} time')
        trytimes = 3  # maximum number of attempts
        if times < trytimes:
            times += 1
            return requestDemo(url, times)
        return 'out of maxtimes'
Summary: recursion looks more elegant, and if the processing code inside the try block raises some other error, the request is retried all the same. Drawbacks: it is harder to follow and easier to get wrong, and when the try block wraps too much code, it also hurts execution speed.
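A quick usage sketch of the recursive version above (the URL is just a placeholder):

html = requestDemo('https://www.baidu.com')
if html == 'out of maxtimes':
    print('all retries failed')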
Method 4: a retry decorator
import requests

def retry(times):
    def wrapper(func):
        def inner_wrapper(*args, **kwargs):
            i = 0
            while i < times:
                try:
                    print(i)
                    return func(*args, **kwargs)
                except Exception:
                    # log here; func.__name__ is the name of the decorated function (here requestDemo)
                    print("logdebug: {}()".format(func.__name__))
                    i += 1
        return inner_wrapper
    return wrapper

@retry(3)  # number of retries: 3
def requestDemo(url):
    headers = {}
    proxies = None
    response = requests.get(url, headers=headers, verify=False, proxies=proxies, timeout=3)
    html = response.text
    # todo: normal processing logic goes here
    return html
Summary: the advantage of the decorator is that it can be reused across many different functions and is very convenient to apply.
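Because the decorator only wraps whatever function it is applied to, the same @retry can be reused on other request helpers as well; a small sketch with a hypothetical postDemo function, reusing the retry and requestDemo definitions above:

@retry(3)
def postDemo(url, data):
    # hypothetical helper showing the same decorator being reused
    response = requests.post(url, data=data, timeout=3)
    return response.text

html = requestDemo('https://www.baidu.com')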
Method 5: the retry package
The retry module, one of Python's many handy third-party packages; look it up yourself (e.g. on Baidu), it is not covered in detail here.
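A minimal sketch, assuming the third-party retry package is installed (pip install retry); the tries and delay parameters follow that package's @retry decorator, so check its documentation for the exact options:

import requests
from retry import retry

@retry(tries=3, delay=2)  # retry up to 3 times, waiting 2 seconds between attempts
def requestDemo(url):
    response = requests.get(url, timeout=3)
    return response.text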