Web Crawler (Part 1)

This post introduces the basics of writing a Python crawler: fetching pages with urllib.request, setting a User-Agent header (and optionally rotating proxies) to cope with anti-crawling measures, extracting image URLs from the page HTML, and saving the images to disk. A worked example walks through each step.
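As a minimal sketch of the extraction step, here is how image addresses can be pulled out of HTML with a regular expression; the HTML fragment below is a hardcoded sample mimicking the markup the script scrapes, not a live page:

```python
import re

# Hypothetical HTML fragment in the style of the pages scraped below
html = '''
<p><img src="//wx4.sinaimg.cn/mw600/0076BSS5ly1g8v6cc4mo0j30u011i0xo.jpg"></p>
<p><img src="//wx1.sinaimg.cn/mw600/0076BSS5ly1g8v93qvajhj30bh0kumy8.jpg"></p>
'''

# Capture the protocol-relative URL of every .jpg inside an img tag
img_addrs = re.findall(r'img src="(//[^"]+?\.jpg)"', html)
print(img_addrs)
```

The script below implements the same idea with plain `str.find` calls instead of a regex.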


import urllib.request
import os
import random


def open_url(url):
    req = urllib.request.Request(url)
    req.add_header("User-Agent",
                   "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/58.0.3029.110 Safari/537.36 Edge/16.16299")

    # Optionally route requests through a randomly chosen proxy
    # proxies = ['119.6.144.70:81', '111.1.36.9:80', '203.144.144.162:8080']
    # proxy = random.choice(proxies)
    #
    # proxy_support = urllib.request.ProxyHandler({'http': proxy})
    # opener = urllib.request.build_opener(proxy_support)
    # urllib.request.install_opener(opener)

    response = urllib.request.urlopen(req)
    html = response.read()
    return html


def get_page(url):
    # Placeholder: not used in this version of the script
    pass


def find_imgs(url):
    html = open_url(url).decode('utf-8')

    img_addrs = []

    # Locate each 'img src=' occurrence and slice out the
    # protocol-relative .jpg address that follows, e.g.:
    # //wx1.sinaimg.cn/mw600/0076BSS5ly1g8v93qvajhj30bh0kumy8.jpg
    # //wx2.sinaimg.cn/large/0076BSS5ly1g8v998imasg306y07yx6q.gif
    a = html.find('img src=')
    while a != -1:
        b = html.find('.jpg', a, a + 100)
        if b != -1:
            # a + 9 skips past 'img src="'; b + 4 keeps the '.jpg' suffix
            img_addrs.append(html[a + 9:b + 4])
        else:
            b = a + 9
        a = html.find('img src=', b)
    return img_addrs


def save_imgs(img_addrs):
    for each in img_addrs:
        filename = each.split('/')[-1]
        with open(filename, 'wb') as f:
            # The addresses are protocol-relative, e.g.
            # //wx4.sinaimg.cn/mw600/0076BSS5ly1g8v6cc4mo0j30u011i0xo.jpg
            # so prepend the scheme before downloading
            url = "http:" + each
            f.write(open_url(url))


def download(folder="ooxxxx", pages=['w', 'g', 'Q', 'A']):
    os.makedirs(folder, exist_ok=True)
    os.chdir(folder)

    url = "http://jandan.net/ooxx"

    for i in pages:
        # Each page is addressed by a Base64 token, e.g.
        # http://jandan.net/ooxx/MjAxOTExMTItNw==#comments
        # Varying the final character before '==' selects a different page
        page_url = url + "/MjAxOTExMTItN" + i + "==#comments"
        img_addrs = find_imgs(page_url)
        save_imgs(img_addrs)


if __name__ == '__main__':
    download()
    print('OK')
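The odd-looking page tokens in download() are just Base64: MjAxOTExMTItNw== decodes to the date-and-page string 20191112-7, which is why swapping the final letter before the == padding moves between pages. A quick check:

```python
import base64

token = "MjAxOTExMTItNw=="
decoded = base64.b64decode(token).decode('utf-8')
print(decoded)  # 20191112-7
```

Building tokens dynamically with base64.b64encode from the current date would be more robust than hardcoding them, since the site's page tokens change daily.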
