Web Scraper Program (Improved)

This article presents a simple Python web scraper that collects news titles and article bodies from the cnbeta site. Using the requests library to issue HTTP requests and lxml to parse the pages, the program automatically gathers every news item across a chosen number of pages. It also covers how to handle page encoding and how to save the results to a file.
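
As a minimal sketch of the fetch-and-parse pattern the program is built on (the 10-second timeout is an added assumption, not part of the original code):

import requests
from lxml import etree

resp = requests.get('https://m.cnbeta.com/wap/index.htm', timeout=10)
resp.encoding = 'utf-8'  # force UTF-8 instead of trusting the response headers
html = etree.HTML(resp.text)  # parse into an element tree that supports XPath queries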



import requests
from lxml import etree
import os

urls = []  # relative hrefs of every article collected from the index pages
num = 1  # running counter used to number articles in the output file


def get_urls(page_num):
    global urls
    headers = {
        'Upgrade-Insecure-Requests': '1',
        'User-Agent': 'Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/63.0.3239.132 Safari/537.36'
    }
    for page in range(1, page_num + 1):
        url = 'https://m.cnbeta.com/wap/index.htm?page=' + str(page)
        try:
            data_list = requests.get(url, headers=headers, timeout=10)
            data_list.encoding = 'utf-8'  # force UTF-8 so the Chinese text decodes correctly
            data_html = etree.HTML(data_list.text)
            # Collect the relative href of every article link on this index page
            data_urls = data_html.xpath('//div[@id="info_list"]/div[@class="list"]/a//@href')
            # data_title = data_html.xpath('//div[@id="info_list"]/div[@class="list"]/a//text()')
            urls += data_urls
        except requests.RequestException:
            print(url + ' : fetch failed...')
    print(urls)
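
One possible hardening step, not part of the original program: reuse a single requests.Session with automatic retries, so transient failures on individual index pages recover on their own. The retry counts and status codes below are illustrative assumptions:

import requests
from requests.adapters import HTTPAdapter
from urllib3.util.retry import Retry

session = requests.Session()
retries = Retry(total=3, backoff_factor=1, status_forcelist=[429, 500, 502, 503])
session.mount('https://', HTTPAdapter(max_retries=retries))
# session.get(url, headers=headers, timeout=10) can then stand in for requests.get(...)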


def write_data(title, content):
    global num
    os.makedirs('wenzhang', exist_ok=True)  # create the output folder if it does not already exist
    with open('wenzhang/cnbeta.txt', 'a', encoding='utf-8') as f:
        f.write(' ' + str(num) + ' -->  ' + title + '  <--\n\n')
        num += 1
        f.write(content + '\n\n--------------------------\n--------------------------\n\n\n')
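
write_data appends everything to a single cnbeta.txt. If one file per article is preferred, a variant along these lines would work; write_article and its sanitizing regex are illustrative, not part of the original:

import os
import re

def write_article(title, content, folder='wenzhang'):
    os.makedirs(folder, exist_ok=True)
    safe_name = re.sub(r'[\\/:*?"<>|]', '_', title)[:50]  # drop characters invalid in file names
    with open(os.path.join(folder, safe_name + '.txt'), 'w', encoding='utf-8') as f:
        f.write(title + '\n\n' + content)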


def get_articles(urls):
    headers = {
        'Referer': 'https://m.cnbeta.com/wap/index.htm',
        'Upgrade-Insecure-Requests': '1',
        'User-Agent': 'Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/63.0.3239.132 Safari/537.36'
    }
    for url in urls:
        try:
            new_url = 'https://m.cnbeta.com' + url  # the index page yields relative hrefs
            response = requests.get(new_url, headers=headers, timeout=10)
            response.encoding = 'utf-8'
            response_html = etree.HTML(response.text)
            title = response_html.xpath('//div[@class="title"]/b//text()')
            if not title:
                print(url + ' : no title found, skipping...')
                continue
            print(title)
            content = response_html.xpath('//div[@class="content"]/p//text()')
            content_all = '\n'.join(content)  # join paragraph fragments with newlines
            write_data(title[0], content_all)
        except requests.RequestException:
            print(url + ' : failed to fetch article...')


print('''
This is a web scraper that crawls the wap (mobile) pages of www.cxxxa.com.
It collects article titles and article bodies. You can choose how many pages you want (35 news items per page).
''')

if __name__ == '__main__':
    page_num = int(input('How many pages of data would you like: '))
    get_urls(page_num)
    get_articles(urls)
