





Recommendation: use xpath rather than BeautifulSoup.
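
For a sense of the difference, here is a minimal side-by-side sketch (using the Douban comment page from the example below): with BeautifulSoup you locate the nodes first and then pull the text out of each one, while with lxml the whole extraction is a single xpath expression.

import requests
from bs4 import BeautifulSoup
from lxml import etree

url = 'https://movie.douban.com/subject/6874741/comments?status=P'
html = requests.get(url).text

# BeautifulSoup: find the comment divs first, then get the text of each <p>
soup = BeautifulSoup(html, 'html.parser')
comments_bs = [p.get_text() for div in soup.find_all('div', class_='comment')
               for p in div.find_all('p')]

# lxml + xpath: one expression does the same job
comments_xpath = etree.HTML(html).xpath('//div[@class="comment"]/p/text()')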




Method 2: hand-written xpath



Complete code:
import requests
from lxml import etree

url = 'https://movie.douban.com/subject/6874741/comments?status=P'
r = requests.get(url).text
s = etree.HTML(r)
# print(s.xpath('//*[@id="comments"]/div/div[2]/p/text()'))  # xpath copied from the browser
print(s.xpath('//div[@class="comment"]/p/text()'))  # hand-written xpath
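
If the printed list comes back empty, Douban is probably rejecting the default requests User-Agent; sending a browser-like header (the value below is only a placeholder) usually gets the page back:

import requests
from lxml import etree

url = 'https://movie.douban.com/subject/6874741/comments?status=P'
headers = {'User-Agent': 'Mozilla/5.0'}  # any browser-style UA string works; this one is just an example
r = requests.get(url, headers=headers).text
s = etree.HTML(r)
print(s.xpath('//div[@class="comment"]/p/text()'))
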
Quiz


Explanation: search among all elements and select every element whose class is 'name'.
//*[@id="paper"]/a/div[2]/text() (answer B) is wrong because of the [@id="paper"] part.
Complete code: all book titles on the current Jianshu page
import requests
from lxml import etree

url = 'http://www.jianshu.com/publications'
r = requests.get(url).text
s = etree.HTML(r)
print(s.xpath('//*[@class="name"]/text()'))  # hand-written xpath: every element whose class is "name"
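
xpath() returns an ordinary Python list of strings, so if one title per line reads better than one long list, a short loop over the result is enough (sketch):

import requests
from lxml import etree

r = requests.get('http://www.jianshu.com/publications').text
titles = etree.HTML(r).xpath('//*[@class="name"]/text()')
for title in titles:  # print one book title per line instead of the whole list
    print(title)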

Complete code: all author nicknames on Qiushibaike
import requests
from lxml import etree

url = 'https://www.qiushibaike.com/text/'
r = requests.get(url).text
s = etree.HTML(r)
print(s.xpath('//*[@id="content-left"]/div/div[1]/a[2]/h2/text()'))  # hand-written xpath: no index on the first div, so it matches every post
Sample output:
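The result is a plain list of nickname strings. Text nodes scraped this way often come back wrapped in newline characters, so if the printout looks padded with '\n', a strip() pass cleans it up (sketch):

import requests
from lxml import etree

r = requests.get('https://www.qiushibaike.com/text/').text
names = etree.HTML(r).xpath('//*[@id="content-left"]/div/div[1]/a[2]/h2/text()')
print([name.strip() for name in names])  # strip() drops the surrounding newlines and spaces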



Code: image link (img @src)
import requests
from lxml import etree

url = 'https://www.jianshu.com/publications#paper'
r = requests.get(url).text
s = etree.HTML(r)
print(s.xpath('//*[@id="paper"]/a[1]/div[1]/img/@src'))  # hand-written xpath: the src attribute of the first book cover image
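
The @src step only returns the link as text; saving the actual image still takes a second request for the binary content, roughly as below (the 'cover.jpg' filename is just an illustration, and the 'https:' prefix handles protocol-relative URLs, which Jianshu often serves):

import requests
from lxml import etree

r = requests.get('https://www.jianshu.com/publications#paper').text
links = etree.HTML(r).xpath('//*[@id="paper"]/a[1]/div[1]/img/@src')
if links:
    link = links[0]
    if link.startswith('//'):  # protocol-relative URL: add a scheme before requesting it
        link = 'https:' + link
    with open('cover.jpg', 'wb') as f:  # 'cover.jpg' is a made-up local filename
        f.write(requests.get(link).content)  # fetch the image bytes and save them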

