The P2P platform data on wdzj.com (网贷之家) is fairly easy to obtain: the main work is analyzing the page's HTML source and extracting the fields you need. No user login is required, so the crawler for this site is simple. It mainly uses the urllib package to fetch pages, BeautifulSoup to parse them, and regular expressions to extract the data. Here is the source code:
# -*- coding: utf-8 -*-
"""
Created on Wed Aug 8 18:22:26 2018
@author: 95647
"""
import urllib
from urllib.request import urlopen
from bs4 import BeautifulSoup
import re
import pandas as pd

headers = {
    'User-Agent': 'Mozilla/5.0 (Windows NT 6.1; WOW64; rv:23.0) Gecko/20100101 Firefox/23.0'
}
lists = []
domains = "https://www.wdzj.com"
def get_platform_site(url, lists):
    """Collect the detail-page URL of every platform listed on the page."""
    req = urllib.request.Request(url, headers=headers)
    html = urlopen(req)
    bsObj = BeautifulSoup(html, 'lxml')
    # Each platform entry sits in a div with class "itemTitle"
    title = bsObj.findAll("div", {'class': 'itemTitle'})
    for titles in title:
        links = titles.findAll("a", {'target': '_blank'})
        for link in links:
            if 'href' in link.attrs:
                lists.append(link.attrs['href'])
    return lists
def pages_num(url):
    """Get the total number of listing pages for each platform category."""
    req = urllib.request.Request(url, headers=headers)
    html = urlopen(req)
    bsObj = BeautifulSoup(html, 'lxml')
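As a minimal offline sketch of the final regex-extraction step mentioned above (the HTML fragment and pattern here are illustrative, not taken from wdzj.com):

```python
import re

# Illustrative HTML fragment shaped like the "itemTitle" blocks parsed above
sample = (
    '<div class="itemTitle">'
    '<a target="_blank" href="https://www.wdzj.com/dangan/aaa/">Platform A</a>'
    '<a target="_blank" href="https://www.wdzj.com/dangan/bbb/">Platform B</a>'
    '</div>'
)

# Pull every href value out of the anchors, mirroring what
# get_platform_site collects via BeautifulSoup's attrs lookup
links = re.findall(r'href="([^"]+)"', sample)
print(links)
```

In the real script, BeautifulSoup narrows the search to the right `div`/`a` tags first, which keeps the regex from matching unrelated links elsewhere on the page.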