Web Scraping Study Notes (2): Data Parsing with XPath Syntax and the lxml Module

What is XPath?

XPath (XML Path Language) is a language for locating information in XML and HTML documents; it lets you traverse and select the elements and attributes inside them.

XPath Development Tools

Chrome extension: XPath Helper.
Firefox extension: Try XPath.

XPath Nodes

In XPath there are seven kinds of nodes: element, attribute, text, namespace, processing instruction, comment, and document (root) nodes. An XML document is treated as a node tree, and the root of the tree is called the document node or root node.
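To make these concrete, here is a tiny XML document (made up for illustration) with several of the node types labeled:

<?xml version="1.0"?>
<bookstore>                      <!-- element node (the root element) -->
  <book lang="en">               <!-- lang="en" is an attribute node -->
    Harry Potter                 <!-- "Harry Potter" is a text node -->
  </book>
  <!-- this line itself is a comment node -->
</bookstore>

The document (root) node is the tree as a whole, one level above <bookstore>.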

XPath Syntax

Usage:

Use // to select elements anywhere in the page, then write the tag name, then a predicate to narrow the selection, for example:

//title[@lang='en']

Things to note:

  1. The difference between / and //: / selects only direct children, while // selects all descendants. In practice // is used more often, though it depends on the situation.

  2. contains: when an attribute holds several values, you can match one of them with the contains() function, for example:

//title[contains(@lang,'en')]
  3. Predicate indices start at 1, not 0 (see the sketch after this list).
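A minimal sketch with lxml that exercises all three points; the sample markup is made up for illustration:

from lxml import etree

doc = etree.HTML('''
<ul>
  <li class="item first">one</li>
  <li class="item">two<span>nested</span></li>
</ul>
''')

# / selects direct children only; // selects descendants at any depth
print(doc.xpath('//ul/span'))   # [] -- span is not a direct child of ul
print(doc.xpath('//ul//span'))  # [<Element span>] -- found inside the second li

# contains() matches one value inside a space-separated attribute
print(doc.xpath('//li[contains(@class, "first")]/text()'))  # ['one']

# predicate indices start at 1: [1] is the first li, not the second
print(doc.xpath('//ul/li[1]/text()'))  # ['one']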

The lxml Library

lxml is an HTML/XML parser whose main job is parsing HTML/XML documents and extracting data from them.
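lxml is a third-party package; if it is not installed yet, it can be fetched with pip install lxml.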

Basic usage:

from lxml import etree

text = '''
<div>
    <ul>
         <li class="item-0"><a href="link1.html">first item</a></li>
         <li class="item-1"><a href="link2.html">second item</a></li>
         <li class="item-inactive"><a href="link3.html">third item</a></li>
         <li class="item-1"><a href="link4.html">fourth item</a></li>
         <li class="item-0"><a href="link5.html">fifth item</a>
     </ul>
 </div>
'''
# Parse the string into an HTML document
html = etree.HTML(text)
print(html)
# Serialize the HTML document back to a string
result = etree.tostring(html).decode('utf-8')
print(result)
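Note that the last <li> in the string above is unclosed: etree.HTML() repairs broken markup while parsing, adding the missing closing tag and wrapping the fragment in <html><body> tags, which you can see in the serialized output.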

Reading HTML from a file:

# Read and parse the file
html = etree.parse('hello.html')

result = etree.tostring(html).decode('utf-8')
print(result)
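One caveat: etree.parse() uses an XML parser by default, which raises an error on markup that is not well-formed. For real-world HTML it is safer to pass an explicit HTML parser:

from lxml import etree

# The HTML parser tolerates missing or unclosed tags
parser = etree.HTMLParser()
html = etree.parse('hello.html', parser)
print(etree.tostring(html).decode('utf-8'))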

Using XPath with lxml

<!-- hello.html -->
<div>
    <ul>
         <li class="item-0"><a href="link1.html">first item</a></li>
         <li class="item-1"><a href="link2.html">second item</a></li>
         <li class="item-inactive"><a href="link3.html"><span class="bold">third item</span></a></li>
         <li class="item-1"><a href="link4.html">fourth item</a></li>
         <li class="item-0"><a href="link5.html">fifth item</a></li>
     </ul>
 </div>

Syntax practice:

from lxml import etree
html = etree.parse('hello.html')
# Get all li tags:
result = html.xpath('//li')
print(result)
for i in result:
    print(etree.tostring(i))
# Get the class attribute values of all li elements:
result = html.xpath('//li/@class')
print(result)
# Get the a tag under li whose href is link1.html
# (the value must exist in hello.html; matching www.baidu.com here would return an empty list):
result = html.xpath('//li/a[@href="link1.html"]')
print(result)
# Get all span tags under li tags:
result = html.xpath('//li//span')
print(result)
# Get all class attributes of the a tags under li:
result = html.xpath('//li/a//@class')
print(result)
# Get the href value of the a inside the last li:
result = html.xpath('//li[last()]/a/@href')
print(result)
# Get the content of the second-to-last li element:
result = html.xpath('//li[last()-1]/a')
print(result)
print(result[0].text)
# A second way to get the content of the second-to-last li element:
result = html.xpath('//li[last()-1]/a/text()')
print(result)
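Two details worth remembering from the practice above: xpath() always returns a list (possibly empty), and indexing into an empty result raises an IndexError. A small guard pattern:

# xpath() returns a list, so check it before indexing
result = html.xpath('//li[last()-1]/a/text()')
content = result[0] if result else ''
print(content)  # 'fourth item'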

Appendix: a full lxml parsing example (scraping Guazi used-car listings)

import requests
import time
from lxml import etree

headers = {
    'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/83.0.4103.61 Safari/537.36',
    'Cookie': 'antipas=2341D891930k307S96aso071t69K0; uuid=1b4989b6-ca05-41f3-a4a6-a88437e13376; ganji_uuid=7798217849299445565043; Hm_lvt_6c9e6e168b46aaa118c7ed52e9a02f43=1597292319; clueSourceCode=%2A%2300; user_city_id=176; guazitrackersessioncadata=%7B%22ca_kw%22%3A%22-%22%7D; sessionid=0ad7cfbe-8d34-4ea4-de96-d5b20d1d9a95; lg=1; Hm_lvt_936a6d5df3f3d309bda39e92da3dd52f=1598599344; close_finance_popup=2020-08-28; lng_lat=109.1936_34.3633; gps_type=1; cainfo=%7B%22ca_a%22%3A%22-%22%2C%22ca_b%22%3A%22-%22%2C%22ca_s%22%3A%22seo_baidu%22%2C%22ca_n%22%3A%22default%22%2C%22ca_medium%22%3A%22-%22%2C%22ca_term%22%3A%22-%22%2C%22ca_content%22%3A%22-%22%2C%22ca_campaign%22%3A%22-%22%2C%22ca_kw%22%3A%22-%22%2C%22ca_i%22%3A%22-%22%2C%22scode%22%3A%22-%22%2C%22keyword%22%3A%22-%22%2C%22ca_keywordid%22%3A%22-%22%2C%22display_finance_flag%22%3A%22-%22%2C%22platform%22%3A%221%22%2C%22version%22%3A1%2C%22client_ab%22%3A%22-%22%2C%22guid%22%3A%221b4989b6-ca05-41f3-a4a6-a88437e13376%22%2C%22ca_city%22%3A%22xa%22%2C%22sessionid%22%3A%220ad7cfbe-8d34-4ea4-de96-d5b20d1d9a95%22%7D; _gl_tracker=%7B%22ca_source%22%3A%22-%22%2C%22ca_name%22%3A%22-%22%2C%22ca_kw%22%3A%22-%22%2C%22ca_id%22%3A%22-%22%2C%22ca_s%22%3A%22self%22%2C%22ca_n%22%3A%22-%22%2C%22ca_i%22%3A%22-%22%2C%22sid%22%3A55404057526%7D; preTime=%7B%22last%22%3A1598599677%2C%22this%22%3A1597292313%2C%22pre%22%3A1597292313%7D; cityDomain=su; Hm_lpvt_936a6d5df3f3d309bda39e92da3dd52f=1598599692',
}


# Collect the detail-page URLs from a listing page
def get_detail_urls(url):
    resp = requests.get(url=url, headers=headers)
    text = resp.content.decode('utf-8')
    html = etree.HTML(text)
    ul = html.xpath('//ul[@class="carlist clearfix js-top"]')[0]
    # print(ul)
    lis = ul.xpath('./li')
    detail_urls = []
    for li in lis:
        detail_url = li.xpath('./a/@href')
        detail_url = 'https://www.guazi.com' + detail_url[0]
        detail_urls.append(detail_url)
    return detail_urls


# Parse the content of a detail page
def parsing_detail_page(url):
    resp = requests.get(url=url, headers=headers)
    text = resp.content.decode('utf-8')
    html = etree.HTML(text)
    title = html.xpath('//div[@class="product-textbox"]/h2/text()')[0]
    title = title.replace('\r\n', '').strip()  # '\r\n' as real characters, not a raw string
    info = html.xpath('//div[@class="product-textbox"]/ul/li/span/text()')
    price = html.xpath('//div[@class="price-main"]/span/text()')[0]
    car = {}
    km = info[2]
    displacement = info[3]
    transmission = info[4]
    car['车型'] = title           # model
    car['价格'] = price           # price
    car['表显里程'] = km          # odometer reading
    car['排量'] = displacement    # engine displacement
    car['变速箱'] = transmission  # transmission type
    return car


# Save one row of data
def save_data(car, f):
    f.write('{},{},{},{},{}\n'.format(car['车型'], car['价格'], car['表显里程'], car['排量'], car['变速箱']))


def main():
    basic_url = 'https://www.guazi.com/xa/buy/o{}/'
    with open('guazi_cs.csv', 'a', encoding='utf-8') as f:
        for i in range(1, 6):
            time.sleep(1)  # pause between listing pages to stay polite
            url = basic_url.format(i)
            # Collect the detail-page URLs on this listing page
            detail_urls = get_detail_urls(url)
            # Parse each detail page and save the result
            for detail_url in detail_urls:
                car = parsing_detail_page(detail_url)
                save_data(car, f)


if __name__ == '__main__':
    main()
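One note on the example: save_data() assembles each CSV row by hand, which breaks as soon as a field contains a comma. Below is a sketch of the same function rewritten with the standard csv module (a drop-in alternative using the same dict keys):

import csv

def save_data(car, f):
    # csv.writer quotes fields that contain commas or newlines
    writer = csv.writer(f)
    writer.writerow([car['车型'], car['价格'], car['表显里程'], car['排量'], car['变速箱']])

If you go this route, open the file with newline='' to avoid blank rows on Windows. Also note that the Cookie in headers is tied to one browsing session and will expire, so expect to refresh it before rerunning the script.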
