Problems Hit When Bulk-Scraping 彼岸图网 (pic.netbian.com)

This post records a hands-on run at scraping images from 彼岸图网 with multiple processes, analyses a scraping failure caused by the site's cookie mechanism, and describes the workarounds tried.

Goal: use a process pool combined with thread pools to pull 25,000 images from the site in a short window.

Problem: the program ends normally but produces nothing, with no error and no exception raised. It took a lot of checking and debugging to pin down.

Cause: 彼岸图网 refreshes its cookie about every 30 minutes. With a stale cookie, requests to the first-level listing pages only return a page whose source contains "跳转中" ("redirecting"), so the XPath expressions match nothing; sending no cookie at all gives the same result.
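
A quick way to confirm this is to check the raw listing response for the "跳转中" marker before handing it to lxml; a minimal sketch, reusing the listing URL and user-agent from the code below:

import requests

header = {
    "user-agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/97.0.4692.99 Safari/537.36",
    # optionally add the current cookie here and compare the two results
}
resp = requests.get('https://pic.netbian.com/new/', headers=header)
resp.encoding = 'gbk'
if '跳转中' in resp.text:
    print('Got the redirect page (stale or missing cookie); XPath would match nothing.')
else:
    print('Got the real listing page.')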

Fix: the only fix found so far is to swap the stale cookie for a fresh one; the problem does not show up when running without multithreading or multiprocessing. My guess is that a valid cookie lets the request skip the "跳转中" interstitial and hit the target page directly, so this is probably not a deliberate anti-scraping measure by the site owner. Even with processes layered over threads, only about 15,000 images can be fetched within the 30-minute cookie window, far short of the target, so I am switching to Selenium. That attempt is still in progress, with a rough sketch of the idea below.
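
As an illustration of the Selenium route, here is a minimal sketch, not the final working version: it loads a listing page in a real browser and simply waits until the real content appears, so the "跳转中" interstitial resolves on its own. It assumes Chrome plus a matching chromedriver, and the XPath expressions are carried over from the requests version further down.

from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

driver = webdriver.Chrome()
driver.get('https://pic.netbian.com/new/')
# Wait until the real listing shows up, i.e. the "跳转中" interstitial has redirected
WebDriverWait(driver, 30).until(
    EC.presence_of_element_located((By.XPATH, '//*[@id="main"]/div[3]/ul/li/a'))
)
links = [a.get_attribute('href')
         for a in driver.find_elements(By.XPATH, '//*[@id="main"]/div[3]/ul/li/a')]
names = [b.text
         for b in driver.find_elements(By.XPATH, '//*[@id="main"]/div[3]/ul/li/a/b')]
print(len(links), 'detail pages found')
driver.quit()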

Below is the first version of the code, the one with the most bugs and algorithmic problems; it is kept here so I can look back at it later. -- recorded Jan. 23

# -*- coding: UTF-8 -*-
"""
@Author: 王散 Creative
@Time: 2022/1/22 18:50
@IDE_Name/Software: PyCharm
@File: 应对彼岸图网的极限反爬
"""
import requests
from lxml import etree
import time
from concurrent.futures import ThreadPoolExecutor, ProcessPoolExecutor
import threading
from multiprocessing import Lock


def task(url):
    """Scrape one listing page: follow every detail link and save its image."""
    # lock = threading.Lock()
    header = {
        "user-agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/97.0.469"
                      "2.99 Safari/537.36",
        # "cookie": paste a freshly captured cookie here when the stale-cookie
        # redirect described above starts returning the "跳转中" page
    }
    # First-level listing page: detail-page links and the matching image titles
    resp1 = requests.get(url=url, headers=header)
    resp1.encoding = 'gbk'
    # print(resp1.text)
    tree = etree.HTML(resp1.text)
    analysis1 = tree.xpath('//*[@id="main"]/div[3]/ul/li//a/@href')
    analysis2 = tree.xpath('//*[@id="main"]/div[3]/ul/li/a/b/text()')
    # zip keeps each link paired with its own title, so the filename can no longer
    # drift out of step the way the old Squence counter could when a request failed
    for ItemTwo, image_name in zip(analysis1, analysis2):
        url_two_page = 'https://pic.netbian.com' + ItemTwo
        resp2 = requests.get(url=url_two_page, headers=header)
        # time.sleep(0.5)
        resp2.encoding = 'gbk'
        tree_two = etree.HTML(resp2.text)
        analysis3 = tree_two.xpath('//*[@id="img"]/img/@src')
        # Second-level detail page: download the full-size image itself
        for ItemThree in analysis3:
            url_image_page = 'https://pic.netbian.com' + ItemThree
            resp3 = requests.get(url=url_image_page, headers=header)
            # lock.acquire()
            with open(rf'D:\python_write_file\爬虫NumberTwo\Image\彼岸网爬的好图2\{image_name}.jpg', 'wb') as image_file:
                image_file.write(resp3.content)
            # lock.release()
            print(f'{image_name}==>爬取完毕')


# def main(num):
#     with ThreadPoolExecutor(252) as exe_Pool:
#         for item in range(num, num+126):
#             lock.acquire()
#             if item == 1:
#                 exe_Pool.submit(task, 'https://pic.netbian.com/new/')
#                 lock.release()
#             else:
#                 exe_Pool.submit(task, f'https://pic.netbian.com/new/index_{item}.html')
#                 lock.release()
#         return num+30


if __name__ == "__main__":
    # num = 1
    # lock = Lock()
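    # 1,260 listing pages in total; page 1 is /new/ while the rest are index_N.html,
    # and the 45 worker processes each run task() on whole listing pages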
    with ProcessPoolExecutor(45) as Process_Pool:
        for item in range(1, 1261):
            if item == 1:
                Process_Pool.submit(task, 'https://pic.netbian.com/new/')
            else:
                Process_Pool.submit(task, f'https://pic.netbian.com/new/index_{item}.html')
        # for number in range(1, 11):
        #     lock.acquire()
        #     num = Process_Pool.submit(main, num)
        #     lock.release()
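
One detail worth noting about the "ends normally, no output, no error" symptom: with ProcessPoolExecutor, an exception raised inside a submitted task is stored on the returned Future and is never printed unless the Future is inspected, so any failure inside task() passes silently. Below is a minimal sketch of how the submission loop could surface those errors; it is a hypothetical restructuring that reuses task() from the code above, not part of the original program.

from concurrent.futures import ProcessPoolExecutor, as_completed

if __name__ == "__main__":
    urls = ['https://pic.netbian.com/new/'] + \
           [f'https://pic.netbian.com/new/index_{i}.html' for i in range(2, 1261)]
    with ProcessPoolExecutor(45) as pool:
        # keep every Future so worker exceptions can be re-raised and reported
        futures = {pool.submit(task, u): u for u in urls}
        for fut in as_completed(futures):
            try:
                fut.result()  # re-raises whatever exception the worker hit
            except Exception as exc:
                print(f'{futures[fut]} failed: {exc}')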
