Fetching an entire web page
from urllib.request import urlopen  # Python 3; urllib2 exists only in Python 2
html = urlopen('page URL')
print(html.read())
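A minimal runnable sketch of the snippet above. To keep it self-contained without network access, it fetches a `data:` URL (which `urlopen` also supports) as a stand-in for a real page URL; note that `read()` returns bytes, not a string.

```python
from urllib.request import urlopen

# A data: URL is used as a stand-in page so this runs offline;
# with a real site you would pass an http(s) URL instead.
html = urlopen('data:text/html,<h1>Hello</h1>')
content = html.read()           # read() returns bytes, not str
print(content.decode('utf-8'))  # decode before treating it as text
```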
Getting a specific tag with BeautifulSoup
from urllib.request import urlopen
from bs4 import BeautifulSoup
html = urlopen('page URL')
bs = BeautifulSoup(html.read(), 'html.parser')
print(bs.h1)  # print the page's first h1 tag
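The same idea can be tried on a small in-memory HTML snippet instead of a live page, so it runs without network access:

```python
from bs4 import BeautifulSoup

# Parse an HTML string directly; BeautifulSoup accepts any markup,
# not just the result of urlopen().read().
doc = '<html><body><h1>Title</h1><p>text</p></body></html>'
bs = BeautifulSoup(doc, 'html.parser')

print(bs.h1)             # the first h1 tag, markup included
print(bs.h1.get_text())  # just the text inside it
```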
Exception handling
from urllib.request import urlopen
from urllib.error import HTTPError, URLError
try:
    html = urlopen('page URL')
except HTTPError as e:  # HTTP status errors (404, 500, ...)
    print(e)
except URLError as e:  # server errors (server not found)
    print(e)
else:
    print(html.read())
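A common pattern is to wrap this in a small helper that returns None on failure instead of crashing. The sketch below is an assumption, not from the original; `HTTPError` must be caught before `URLError` because it is a subclass of it. A `data:` URL stands in for a real page so the call runs offline.

```python
from urllib.request import urlopen
from urllib.error import HTTPError, URLError

def fetch(url):
    """Return the page body as bytes, or None if the request fails."""
    try:
        html = urlopen(url)
    except HTTPError as e:   # HTTP status errors such as 404 or 500
        print(e)
        return None
    except URLError as e:    # server unreachable / DNS failure
        print(e)
        return None
    return html.read()

# A data: URL serves as a stand-in page so this example runs offline.
body = fetch('data:text/html,<h1>ok</h1>')
```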
find() and findAll() in BeautifulSoup
findAll(tag, attributes, recursive, text, limit, keywords)
find(tag, attributes, recursive, text, keywords)
Examples:

| Call | What it matches |
| --- | --- |
| .findAll(['h1', 'h2', 'h3']) | All h1, h2, and h3 tags |
| .findAll('span', {'class': {'green', 'red'}}) | All span tags whose class is green or red |
| abcList = bs.findAll(text='abc'); print(len(abcList)) | How many times the text 'abc' appears |
| .findAll(id='title', class_='text') | All tags with id title and class text (class_ avoids the reserved word) |
| .findAll(id='title') == .findAll('', {'id': 'title'}) | Equivalent forms |
| .findAll(class_='red') == .findAll('', {'class': 'red'}) | Equivalent forms |
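The first two rows of the table can be sketched against a small in-memory snippet (the snippet itself is made up for illustration):

```python
from bs4 import BeautifulSoup

doc = '''
<h1>A</h1><h2>B</h2>
<span class="green">x</span><span class="red">y</span><span class="blue">z</span>
'''
bs = BeautifulSoup(doc, 'html.parser')

# A list of tag names matches any of them.
headings = bs.findAll(['h1', 'h2', 'h3'])

# A set of attribute values matches tags having any one of them,
# so the blue span is excluded here.
spans = bs.findAll('span', {'class': {'green', 'red'}})

print(len(headings))  # 2
print(len(spans))     # 2
```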

This article covered the basics of Python web scraping: fetching an entire page, parsing HTML with BeautifulSoup to extract specific tags, and handling exceptions. It walked through BeautifulSoup's find() and findAll() methods to help beginners master the fundamentals of extracting data from web pages.