1. Common errors
HTTPConnectionPool(host=XX): Max retries exceeded with url: ...
This error typically appears when requests are sent in quick succession and connections are not returned to the pool in time. To make each request disconnect as soon as it finishes and release its pooled connection, set 'Connection': 'close' in the request headers: headers = {'Connection': 'close'}
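A minimal sketch of the fix, assuming an illustrative target URL and timeout (neither comes from the article):

import requests

# 'Connection': 'close' tells the server not to keep the connection alive,
# so the pooled connection is released as soon as the response arrives
headers = {'Connection': 'close'}
response = requests.get('http://example.com', headers=headers, timeout=5)
print(response.status_code)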
Using a proxy IP, so the target site sees the proxy's address rather than yours: requests.get(url=url, headers=headers, proxies={'https': '134.209.13.16:8080'}).text
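A sketch of a proxied request; the proxy address is the sample from above and may well be dead by now, so substitute a live one. Note that the dict key ('http' or 'https') must match the scheme of the URL being requested:

import requests

proxies = {'https': '134.209.13.16:8080'}  # sample proxy; assume you supply a working one
headers = {'Connection': 'close'}
page_text = requests.get(url='https://example.com', headers=headers, proxies=proxies, timeout=5).text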
2. Scraping free resume templates from 站长素材
# Scrape the free resume templates from sc.chinaz.com (站长素材)
import requests
from lxml import etree
import random

headers = {
    'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/72.0.3626.121 Safari/537.36',
    'Connection': 'close'  # release the pooled connection as soon as each request finishes
}
url_page_one = 'http://sc.chinaz.com/jianli/free.html'
# A generic URL template for page 2 and onwards
url_demo = 'http://sc.chinaz.com/jianli/free_%d.html'
start_page = int(input('enter a start page num:'))
end_page = int(input('enter an end page num:'))
for pageNum in range(start_page, end_page + 1):
    if pageNum == 1:
        url = url_page_one  # page 1 has no page number in its URL
    else:
        url = url_demo % pageNum
    response = requests.get(url=url, headers=headers)
    response.encoding = 'utf-8'
    page_text = response.text
    # Parse the list page: each entry's detail-page URL and template name
    tree = etree.HTML(page_text)
    div_list = tree.xpath('//div[@id="container"]/div')
    for div in div_list:
        name = div.xpath('./p/a/text()')[0]
        detail_url = div.xpath('./p/a/@href')[0]
        # Request the detail page to get its HTML source
        detail_page_text = requests.get(url=detail_url, headers=headers).text
        # Parse the detail page for the download addresses
        tree = etree.HTML(detail_page_text)
        li_list = tree.xpath('//div[@class="clearfix mt20 downlist"]/ul/li')
        # Pick one li tag at random (each li holds one download mirror URL)
        li = random.choice(li_list)
        download_url = li.xpath('./a/@href')[0]
        # Download the resume archive itself
        data = requests.get(url=download_url, headers=headers).content
        name = name + '.rar'
        with open(name, 'wb') as fp:
            fp.write(data)
        print(name, 'downloaded successfully!')
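When a detail or download page stalls, the inner requests above are where the Max retries exceeded error from section 1 tends to surface. A hedged helper that adds a timeout and a few retries (get_with_retry, the 3 tries, and the 10-second timeout are illustrative choices, not part of the original script):

import time
import requests

def get_with_retry(url, headers, retries=3, timeout=10):
    # Try the GET a few times before giving up, pausing briefly between attempts
    for attempt in range(retries):
        try:
            return requests.get(url=url, headers=headers, timeout=timeout)
        except requests.exceptions.RequestException:
            if attempt == retries - 1:
                raise
            time.sleep(2)

Any of the requests.get calls inside the loop can then be swapped for this helper, e.g. detail_page_text = get_with_retry(detail_url, headers).text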
Summary: resolving the HTTPConnectionPool max-retries problem and scraping through a proxy
This article covered the HTTPConnectionPool Max retries exceeded error encountered while scraping, showed how setting 'Connection': 'close' in the request headers disconnects immediately and releases pooled connection resources, and demonstrated how to route requests through a proxy IP to avoid being blocked by the target site.