Task Description
Learn what an IP address is, why IP bans happen, and how to deal with them.
Scrape the Xici proxy list and build your own proxy pool.
Xici site: https://www.xicidaili.com/
Reference: https://blog.youkuaiyun.com/weixin_43720396/article/details/88218204
How to deal with IP bans
- Fake the User-Agent
Set the User-Agent field in the request headers to a real browser's User-Agent so the request looks like it comes from a browser.
You can also collect the User-Agents of several browsers in advance and pick one at random for each request, which makes the crawler even harder to detect (a rotation sketch follows the snippet below).
Usage:
import requests

user_agent = 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_14_2) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/72.0.3626.119 Safari/537.36'
headers = {'User-Agent': user_agent}
url = ''
r = requests.get(url, headers=headers)
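To rotate User-Agents as described above, keep a small pool and choose one per request. A minimal sketch (the extra User-Agent strings are only examples; substitute whatever real browser strings you have collected):

import random
import requests

user_agents = [
    'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_14_2) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/72.0.3626.119 Safari/537.36',
    'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/72.0.3626.119 Safari/537.36',
    'Mozilla/5.0 (X11; Linux x86_64; rv:65.0) Gecko/20100101 Firefox/65.0',
]

def get_with_random_ua(url):
    # pick a different User-Agent for every request
    headers = {'User-Agent': random.choice(user_agents)}
    return requests.get(url, headers=headers)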
- Insert a random delay between successive requests
import time
import random

time.sleep(random.randint(0, 3))  # pause for a random whole number of seconds in [0, 3]
time.sleep(random.random())       # pause for a random fraction of a second in [0, 1)
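In a crawl loop this looks like the sketch below (urls is assumed to be the list of pages to crawl, and headers the dict defined above):

for url in urls:
    r = requests.get(url, headers=headers)
    # ... process r.text here ...
    time.sleep(random.randint(1, 3))  # wait 1~3 seconds before the next request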
- Fake cookies
If a page can be opened normally in the browser, you can copy the browser's cookies and send them with your requests, for example:
cookies = dict(uuid='b18f0e70-8705-470d-bc4b-09a8da617e15', UM_distinctid='15d188be71d50-013c49b12ec14a-3f73035d-100200-15d188be71ffd')
resp = requests.get(url, cookies=cookies)
# Convert the cookies string copied from the browser into a dict
def cookies2dict(cookies):
    items = cookies.split(';')
    d = {}
    for item in items:
        kv = item.split('=', 1)
        k = kv[0].strip()  # strip the space left over after splitting on ';'
        v = kv[1].strip()
        d[k] = v
    return d
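A brief usage sketch, with cookies_str standing for the raw cookie string copied from the browser's developer tools and url for the target page:

cookies_str = 'uuid=b18f0e70-8705-470d-bc4b-09a8da617e15; UM_distinctid=15d188be71d50-013c49b12ec14a-3f73035d-100200-15d188be71ffd'
cookies = cookies2dict(cookies_str)
resp = requests.get(url, cookies=cookies)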
Note: even when you send requests with browser cookies, your IP can still be banned if the request rate is too high. In that case, complete the corresponding manual verification in the browser (e.g. click the verification image), and the same cookies can then be used for requests again.
- Use proxies
You can rotate among several proxy IPs so that no single IP sends too many requests and gets banned, for example:
proxies = {'http': 'http://10.10.10.10:8765', 'https': 'https://10.10.10.10:8765'}
resp = requests.get(url, proxies=proxies)
# Note: free proxy IPs can be obtained from http://www.xicidaili.com/nn/
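To actually rotate, keep a list of proxy configurations and pick one at random per request. A minimal sketch (the addresses are placeholders, and url is the target page):

import random

proxy_pool = [
    {'http': 'http://10.10.10.10:8765', 'https': 'https://10.10.10.10:8765'},
    {'http': 'http://11.11.11.11:8765', 'https': 'https://11.11.11.11:8765'},
]

resp = requests.get(url, proxies=random.choice(proxy_pool))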
Scrape Xici proxies and build your own proxy pool
The code below comes from the reference https://blog.youkuaiyun.com/weixin_43720396/article/details/88218204 , which explains it in detail.
from bs4 import BeautifulSoup
import requests
import re
import json
def open_proxy_url(url):
    user_agent = 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_14_2) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/72.0.3626.119 Safari/537.36'
    headers = {'User-Agent': user_agent}
    try:
        r = requests.get(url, headers=headers, timeout=10)
        r.raise_for_status()
        r.encoding = r.apparent_encoding
        return r.text
    except Exception:
        print('Unable to open page ' + url)
def get_proxy_ip(response):
    proxy_ip_list = []
    soup = BeautifulSoup(response, 'html.parser')
    proxy_ips = soup.find(id='ip_list').find_all('tr')
    for proxy_ip in proxy_ips:
        if len(proxy_ip.select('td')) >= 8:
            ip = proxy_ip.select('td')[1].text
            port = proxy_ip.select('td')[2].text
            protocol = proxy_ip.select('td')[5].text
            if protocol in ('HTTP', 'HTTPS', 'http', 'https'):
                proxy_ip_list.append(f'{protocol}://{ip}:{port}')
    return proxy_ip_list
def open_url_using_proxy(url, proxy):
    user_agent = 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_14_2) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/72.0.3626.119 Safari/537.36'
    headers = {'User-Agent': user_agent}
    proxies = {}
    if proxy.startswith(('HTTPS', 'https')):
        proxies['https'] = proxy
    else:
        proxies['http'] = proxy
    try:
        r = requests.get(url, headers=headers, proxies=proxies, timeout=10)
        r.raise_for_status()
        r.encoding = r.apparent_encoding
        return (r.text, r.status_code)
    except Exception:
        print('Unable to open page ' + url)
        print('Invalid proxy IP: ' + proxy)
        return False
def check_proxy_availability(proxy):
    url = 'http://www.baidu.com'
    result = open_url_using_proxy(url, proxy)
    VALID_PROXY = False
    if result:
        text, status_code = result
        if status_code == 200:
            r_title = re.findall('<title>.*</title>', text)
            if r_title:
                # the title must match Baidu's real homepage title (kept in Chinese
                # because it is compared against the actual page content)
                if r_title[0] == '<title>百度一下,你就知道</title>':
                    VALID_PROXY = True
    if VALID_PROXY:
        check_ip_url = 'https://jsonip.com/'
        try:
            text, status_code = open_url_using_proxy(check_ip_url, proxy)
        except Exception:
            return
        print('Valid proxy IP: ' + proxy)
        with open('valid_proxy_ip.txt', 'a') as f:
            f.write(proxy + '\n')  # one proxy per line (writelines would not add a newline)
        try:
            source_ip = json.loads(text).get('ip')
            print(f'Source IP address: {source_ip}')
            print('=' * 40)
        except Exception:
            print('The response is not JSON and cannot be parsed')
            print(text)
    else:
        print('Invalid proxy IP: ' + proxy)
if __name__ == '__main__':
    proxy_url = 'https://www.xicidaili.com/'
    proxy_ip_filename = 'proxy_ip.txt'
    text = open_proxy_url(proxy_url)
    with open(proxy_ip_filename, 'w') as f:
        f.write(text)
    text = open(proxy_ip_filename, 'r').read()
    proxy_ip_list = get_proxy_ip(text)
    proxy_ip_list.insert(0, 'http://172.16.160.1:3128')  # my own proxy server
    for proxy in proxy_ip_list:
        check_proxy_availability(proxy)
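Once valid_proxy_ip.txt has been populated, the working proxies can be reused in later crawls. A minimal sketch, assuming the file contains one proxy per line as written above:

import random
import requests

with open('valid_proxy_ip.txt') as f:
    valid_proxies = [line.strip() for line in f if line.strip()]

proxy = random.choice(valid_proxies)
scheme = 'https' if proxy.lower().startswith('https') else 'http'
r = requests.get('http://www.baidu.com', proxies={scheme: proxy}, timeout=10)
print(r.status_code)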