Overall workflow:
1. Data scraping;
2. Data cleaning;
3. Modeling and tuning;
4. Business implications;
5. Reflections.
I. Data Scraping
Environment: Python 3.7
from parsel import Selector
import requests
import time
lines = []
for i in range(1, 3):  # test on a couple of pages first, then scrape the full set
    base_url = 'https://tj.lianjia.com/ershoufang/pg%s/'
    url = base_url % i
    content = requests.get(url)
    time.sleep(2)  # sleep 2 seconds so requests are not too frequent
    sel = Selector(text=content.text)
    for x in sel.css('.info.clear'):  # CSS paths located with Chrome DevTools
        title = x.css('a::text').extract_first()
        community = x.css('.houseInfo>a::text').extract_first()
        # these selectors return lists of text fragments; join them into one string
        address = ''.join(x.css('.address>::text').getall())
        flood = ''.join(x.css('.flood>::text').getall())
        totalPrice = x.css('.totalPrice>span::text').extract_first()
        lines.append('%s,天津%s,%s,%s,%sW' % (title, community, address, flood, totalPrice))
# print("lines", lines)
with open('tianjin_datas.csv', 'w', encoding='utf-8') as f:
    for line in lines:
        f.write(line)
        f.write('\n')
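One caveat with hand-joining fields into a comma-separated string: if a title or address itself contains a comma, the CSV columns silently shift. A safer sketch uses the standard csv module, which quotes such fields automatically (the row values below are hypothetical, only shaped like what the scraper collects):

```python
import csv

# Hypothetical rows: (title, community, address, flood, total price in 万)
rows = [
    ("南北通透, 精装两居", "天津某小区", "2室1厅 | 90平米", "中楼层(共24层)", "350"),
]

# newline='' avoids blank lines on Windows; utf-8 keeps Chinese text intact
with open("tianjin_datas.csv", "w", newline="", encoding="utf-8") as f:
    writer = csv.writer(f)
    writer.writerow(["title", "community", "address", "flood", "totalPrice_w"])
    for row in rows:
        writer.writerow(row)  # embedded commas get quoted, not split
```

Reading the file back with csv.reader (or pandas.read_csv) then recovers exactly five columns per row, even for the comma-containing title.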
II. Data Cleaning
The 48,359 scraped records were processed for duplicates, missing values, outliers, string-type conversion, and numeric conversion, to prepare them for modeling:
Before cleaning:
Cleaning code:
import pandas as pd
import numpy as np  # used for numeric type conversion
# read the data
lianjia
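The cleaning steps named above (duplicates, missing values, type conversion, outliers) can be sketched as follows. This is a minimal illustration on a tiny hypothetical DataFrame, not the original 48,359-row data; the column name totalPrice_w and the 10–5000 outlier bounds are assumptions:

```python
import pandas as pd
import numpy as np

# Hypothetical sample shaped like the scraped listings
lianjia = pd.DataFrame({
    "title": ["A", "A", "B", "C", "D"],
    "community": ["天津X", "天津X", "天津Y", None, "天津Z"],
    "totalPrice_w": ["350", "350", "420", "300", "abc"],
})

lianjia = lianjia.drop_duplicates()             # remove duplicate listings
lianjia = lianjia.dropna(subset=["community"])  # drop rows missing key fields

# string -> numeric; unparseable values become NaN and are then dropped
lianjia["totalPrice_w"] = pd.to_numeric(lianjia["totalPrice_w"], errors="coerce")
lianjia = lianjia.dropna(subset=["totalPrice_w"])

# outlier filter: keep prices inside an assumed plausible range (10–5000万)
lianjia = lianjia[lianjia["totalPrice_w"].between(10, 5000)]
```

On the sample above this leaves two rows (A and B): the duplicate A is dropped first, C goes with the missing community, and D goes when "abc" fails numeric conversion.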