006 BeautifulSoup

This article introduces the basic usage of the BeautifulSoup library, covering installation, choice of parser, and tag selectors, and then explains the standard selectors and CSS selectors in detail.

1. What is BeautifulSoup

A flexible and convenient web-page parsing library that parses efficiently and supports multiple parsers.

With it you can extract information from web pages conveniently, without having to write regular expressions.

2. Installation

pip install beautifulsoup4
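
The examples below use the lxml parser, which is a separate package rather than part of BeautifulSoup itself; if it is not already present, it can be installed the same way (an optional setup step, not mentioned in the original text):

pip install lxml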

3. Usage in Detail

1. Parsers supported by BeautifulSoup

Parser | Usage | Advantages | Disadvantages
Python standard library | BeautifulSoup(markup, "html.parser") | Built into Python; moderate speed; good document fault tolerance | Poor fault tolerance in versions before Python 2.7.3 / 3.2.2
lxml HTML parser | BeautifulSoup(markup, "lxml") | Fast; good document fault tolerance | Requires a C library to be installed
lxml XML parser | BeautifulSoup(markup, ["lxml", "xml"]) or BeautifulSoup(markup, "xml") | Fast; the only parser that supports XML | Requires a C library to be installed
html5lib | BeautifulSoup(markup, "html5lib") | Best fault tolerance; parses the document the way a browser does; generates valid HTML5 | Very slow; external Python dependency
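
A minimal sketch of how the parser is chosen when constructing the soup; all of these calls take the same markup, and which ones work depends on which parsers are installed on your machine:

from bs4 import BeautifulSoup

markup = "<p>Hello<p>World"                  # deliberately malformed HTML
print(BeautifulSoup(markup, "html.parser"))  # standard library parser
print(BeautifulSoup(markup, "lxml"))         # lxml HTML parser (needs lxml installed)
print(BeautifulSoup(markup, "html5lib"))     # html5lib parser (needs html5lib installed)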

2. Basic Usage

We create an HTML string that the later examples will use for demonstration:

html = """
<html><head><title>The Dormouse's story</title></head>
<body>
<p class="title" name="dromouse"><b>The Dormouse's story</b></p>
<p class="story">Once upon a time there were three little sisters; and their names were
<a href="http://example.com/elsie" class="sister" id="link1"><!-- Elsie --></a>,
<a href="http://example.com/lacie" class="sister" id="link2">Lacie</a> and
<a href="http://example.com/tillie" class="sister" id="link3">Tillie</a>;
and they lived at the bottom of a well.</p>
<p class="story">...</p>
"""
from bs4 import BeautifulSoup
soup = BeautifulSoup(html, "lxml")   # parse the string with the lxml parser
print(soup.prettify())               # formatted output, with missing tags completed
print(soup.title.string)             # The Dormouse's story
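
Note that the sample string above is missing its closing </body> and </html> tags; a short sketch (an added check, not part of the original) confirming that the parser completes them automatically:

from bs4 import BeautifulSoup
soup = BeautifulSoup(html, "lxml")
fixed = soup.prettify()
print("</body>" in fixed, "</html>" in fixed)  # both True: lxml fills in the missing tags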

3. Tag Selectors

1. Selecting elements

We keep using the HTML string from above:

from bs4 import BeautifulSoup
soup = BeautifulSoup(html,'lxml')
print(soup.title)        # <title>The Dormouse's story</title>
print(type(soup.title))  # <class 'bs4.element.Tag'>
print(soup.head)         # the whole <head> tag
print(soup.p)            # only the first <p> tag

This kind of selection returns only the first match: there are several p tags in the document, but only the first matching one is returned.
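
For comparison, a minimal sketch of getting every <p> tag instead of just the first one; it uses find_all(), which is covered in the standard-selector section below:

from bs4 import BeautifulSoup
soup = BeautifulSoup(html,'lxml')
print(soup.p)                   # only the first <p>
print(len(soup.find_all('p')))  # 3 - all <p> tags in the sample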

2. Getting the tag name
from bs4 import BeautifulSoup
soup = BeautifulSoup(html,'lxml')
print(soup.title.name)

Output: title. soup.title.name returns the name of the tag.

3. Getting attributes
from bs4 import BeautifulSoup
soup = BeautifulSoup(html,'lxml')
print(soup.p.attrs['name'])
print(soup.p['name'])

Both soup.p.attrs['name'] and soup.p['name'] return the value of the tag's name attribute (here dromouse).
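
One point worth adding (standard BeautifulSoup behaviour, not shown in the original): multi-valued attributes such as class come back as a list, while most other attributes come back as plain strings:

from bs4 import BeautifulSoup
soup = BeautifulSoup(html,'lxml')
print(soup.p['name'])    # 'dromouse' - a plain string
print(soup.p['class'])   # ['title'] - class is multi-valued, so a list is returned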

4. Getting the content
from bs4 import BeautifulSoup
soup = BeautifulSoup(html,'lxml')
print(soup.p.string)

soup.p.string returns the text inside the tag.
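
.string only works when the tag has a single string child; a short sketch of the difference, using get_text() (a standard BeautifulSoup method not introduced in the original text):

from bs4 import BeautifulSoup
soup = BeautifulSoup(html,'lxml')
print(soup.p.string)             # The Dormouse's story (single string child)
story = soup.find('p', class_='story')
print(story.string)              # None - this <p> has several children
print(story.get_text()[:30])     # get_text() concatenates all the text instead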

5. Nested selection
from bs4 import BeautifulSoup
soup = BeautifulSoup(html,'lxml')
print(soup.head.title.string)
6. Child and descendant nodes
from bs4 import BeautifulSoup
soup = BeautifulSoup(html,'lxml')
print(soup.p.contents)

.contents returns all direct children of the tag as a list.

from bs4 import BeautifulSoup
soup = BeautifulSoup(html,'lxml')
print(soup.p.children)    # .children is an iterator
for i,child in enumerate(soup.p.children):  # enumerate gives the index and the node
    print(i,child)
from bs4 import BeautifulSoup
soup = BeautifulSoup(html,'lxml')
print(soup.p.descendants)
for i,child in enumerate(soup.p.descendants):
    print(i,child)

.descendants returns all descendant nodes (children, grandchildren, and so on) as a generator.

7. Parent and ancestor nodes
from bs4 import BeautifulSoup
soup = BeautifulSoup(html,'lxml')
print(soup.a.parent)

.parent returns the direct parent node.

from bs4 import BeautifulSoup
soup = BeautifulSoup(html,'lxml')
print(list(enumerate(soup.a.parents))) 

.parents returns all ancestor nodes as a generator; enumerate() and list() are used here just to display the results.

8. Sibling nodes
from bs4 import BeautifulSoup
soup = BeautifulSoup(html,'lxml')
print(list(enumerate(soup.a.next_siblings)))
print(list(enumerate(soup.a.previous_siblings)))

.next_siblings returns all following sibling nodes.
.previous_siblings returns all preceding sibling nodes. (Both are generators.)

4. Standard Selectors

1. find_all(name, attrs, recursive, text, **kwargs)

Searches the document by tag name, attributes, or text content and returns every match as a list; a short sketch follows.
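
A minimal sketch of the three ways of searching, run against the sample html above (the attribute and text values are taken from that sample):

from bs4 import BeautifulSoup
soup = BeautifulSoup(html,'lxml')
print(soup.find_all('a'))                    # by tag name: all three <a> tags
print(soup.find_all(attrs={'id': 'link1'}))  # by attribute
print(soup.find_all(id='link1'))             # keyword shortcut for the same query
print(soup.find_all(class_='sister'))        # class is a Python keyword, so use class_
print(soup.find_all(text='Lacie'))           # by text content: returns the matching strings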

2. find(name, attrs, recursive, text, **kwargs)

find() takes the same arguments as find_all() but returns only the first matching element (or None if nothing matches). The related methods below come in similar pairs; see the sketch after this list.

find_parents() / find_parent()

find_parents() returns all ancestor nodes; find_parent() returns the direct parent node.

find_next_siblings() / find_next_sibling()

find_next_siblings() returns all following sibling nodes; find_next_sibling() returns the first following sibling node.

find_previous_siblings() / find_previous_sibling()

find_previous_siblings() returns all preceding sibling nodes; find_previous_sibling() returns the first preceding sibling node.

find_all_next() / find_next()

find_all_next() returns all matching nodes after the current node; find_next() returns the first matching node after it.

find_all_previous() / find_previous()

find_all_previous() returns all matching nodes before the current node; find_previous() returns the first matching node before it.
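
A minimal sketch of find() and a few of the helpers above, again against the sample html (the values in the comments are what that sample is expected to produce):

from bs4 import BeautifulSoup
soup = BeautifulSoup(html,'lxml')
first_a = soup.find('a')                     # first <a> tag (id="link1")
print(first_a.find_parent('p')['class'])     # ['story'] - the enclosing <p class="story">
print(first_a.find_next_sibling('a')['id'])  # link2
print(first_a.find_next('a')['id'])          # link2 (the next <a> anywhere after this node)
print(soup.find('a', id='link3').find_previous_sibling('a')['id'])  # link2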

5. CSS Selectors

When writing CSS, a tag name needs no prefix, a class name is preceded by a dot, and an id is preceded by #. We can filter elements the same way here, using the soup.select() method, which returns a list.

1. Searching by tag name
print(soup.select('title'))
print(soup.select('a'))
print(soup.select('b'))
2. Searching by class name
print(soup.select('.sister'))
3. Searching by id
print(soup.select('#link1'))
4. Combined selectors

Combined selectors work exactly as they do in a CSS file: tag names, class names, and ids can be combined. For example, to find the element with id link1 inside a p tag, separate the two parts with a space:

print(soup.select('p #link1'))

Searching direct child tags:

print(soup.select("head > title"))
5. Searching by attribute

Attributes can also be used when searching; an attribute selector is written in square brackets. Note that the attribute and the tag name belong to the same node, so there must be no space between them, otherwise nothing will match.

print(soup.select('a[class="sister"]'))
print(soup.select('a[href="http://example.com/elsie"]'))

Attribute selectors can likewise be combined with the selectors above: separate parts that belong to different nodes with a space, and leave no space between parts that belong to the same node.

print(soup.select('p a[href="http://example.com/elsie"]'))

All of the select() calls above return lists, so you can iterate over the results and call get_text() on each element to get its text content.
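
A minimal sketch of iterating over a select() result and extracting text, based on the sample html:

from bs4 import BeautifulSoup
soup = BeautifulSoup(html,'lxml')
for a in soup.select('a.sister'):     # all <a> tags with class "sister"
    print(a['href'], a.get_text())    # the href attribute plus the text of each link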

6. Summary

This article covers quite a lot and summarises most of Beautiful Soup's methods, though not all of them: Beautiful Soup also has functions for modifying and deleting nodes, but since those are used far less often, only the searching and extraction methods are covered here. I hope it helps. Once you have mastered Beautiful Soup it will save you a great deal of work, so keep at it!
