from bs4 import BeautifulSoup
import re

html_doc = """
<html><head><title>The Dormouse's story</title></head>
<body>
<p class="title"><b>The Dormouse's story</b></p>
<p class="story">Once upon a time there were three little sisters; and their names were
<a href="http://example.com/elsie" class="sister" id="link1">Elsie</a>,
<a href="http://example.com/lacie" class="sister" id="link2">Lacie</a> and
<a href="http://example.com/tillie" class="sister" id="link3">Tillie</a>;
and they lived at the bottom of a well.</p>
<p class="story">...</p>
"""

soup = BeautifulSoup(html_doc, 'html.parser')

# Find every <a> tag in the document
print("all links:")
links = soup.find_all('a')
for link in links:
    print(link.name, link['href'], link.get_text())

# Find a single <a> tag by its exact href value
print("lacie's link:")
link_node = soup.find('a', href="http://example.com/lacie")
print(link_node.name, link_node['href'], link_node.get_text())

# Match the href attribute against a regular expression
print("regex match:")
link_node = soup.find('a', href=re.compile(r"ill"))
print(link_node.name, link_node['href'], link_node.get_text())

# Find a <p> tag by its CSS class (class_ avoids clashing with the Python keyword)
print("get <p> paragraph text:")
p_node = soup.find('p', class_="title")
print(p_node.name, p_node.get_text())

Output:

all links:
a http://example.com/elsie Elsie
a http://example.com/lacie Lacie
a http://example.com/tillie Tillie
lacie's link:
a http://example.com/lacie Lacie
regex match:
a http://example.com/tillie Tillie
get <p> paragraph text:
p The Dormouse's story
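Beyond find() and find_all(), BeautifulSoup also supports CSS selectors through select() and select_one(). A minimal sketch of the same kind of queries, reusing a shortened version of the sample document above (assumes bs4 is installed):

```python
from bs4 import BeautifulSoup

html_doc = """
<html><head><title>The Dormouse's story</title></head>
<body>
<p class="title"><b>The Dormouse's story</b></p>
<p class="story">
<a href="http://example.com/elsie" class="sister" id="link1">Elsie</a>
<a href="http://example.com/lacie" class="sister" id="link2">Lacie</a>
</p>
"""

soup = BeautifulSoup(html_doc, 'html.parser')

# All <a> tags with class "sister" -- roughly equivalent to
# soup.find_all('a', class_="sister")
sisters = soup.select('a.sister')

# A single tag by id -- roughly equivalent to soup.find(id="link2")
lacie = soup.select_one('#link2')

print([a.get_text() for a in sisters])  # ['Elsie', 'Lacie']
print(lacie['href'])                    # http://example.com/lacie
```

select() always returns a list, while select_one() returns the first match or None, mirroring the find_all()/find() pair used above.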