My First Time Following an Online Course All the Way Through
Through the PaddlePaddle official WeChat account I waited for and joined the first post-outbreak course, "Deep Learning: 7-Day Introduction to CV for the Epidemic". After seven intense days of study on AI Studio I finally finished the course, and I am now waiting for the class assistant to mail out the completion certificate. Many thanks to the instructors for their excellent lectures. Below are my main impressions from the course and some notes on tuning and optimizing convolutional networks.
Main Struggles
1. Crossing disciplines is genuinely hard. I have some Python experience, have watched a few neural-network videos, and have run a few basic models, but I still ran into plenty of question marks along the way, sometimes without even knowing what to ask. As the course progressed, between the group chat and the instructors' explanations, many of those questions melted away like ice in spring, and the gains kept coming.
2. Time is the biggest enemy. The course is packed: seven days, roughly one live lecture per day with homework due the next day, and every assignment has a leaderboard. For someone who works during the day and has family duties at night, just finding time to run code on AI Studio was hard enough; I kept getting stuck on various problems and barely managed to submit each assignment before the deadline.
3. Assumptions made without experience are unreliable. Everyone says hyperparameter tuning is black magic, and for an inexperienced person like me, tuning parameters and models really did feel like black magic. I assumed that coarse images like hand gestures should use large convolution kernels while fine-grained face recognition should use small ones, but after learning VGG I realized all my earlier guesses and models were wishful thinking.
Day01 - COVID-19 Data Visualization
The first day's assignment was straightforward: crawl the data and visualize it. I already had some background here.
Crawling the data:
import json
import re
import requests
import datetime

today = datetime.date.today().strftime('%Y%m%d')   # e.g. 20200315


def crawl_dxy_data():
    """
    Crawl the real-time statistics from DXY (丁香园) and save them as a JSON file
    under the data directory, named after the current date.
    """
    response = requests.get('丁香园实时数据链接')   # the DXY real-time data URL goes here; requests.get() sends the request
    print(response.status_code)                     # print the HTTP status code
    try:
        url_text = response.content.decode()        # response.content.decode() is the recommended way to get the HTML page
        # print(url_text)
        url_content = re.search(r'window.getAreaStat = (.*?)}]}catch',   # re.search() finds the first match of the pattern and returns a match object
                                url_text, re.S)     # the page contains newlines; without re.S the pattern is matched line by line,
                                                    # with re.S the whole string is treated as one block
        texts = url_content.group()                 # the full matched text
        content = texts.replace('window.getAreaStat = ', '').replace('}catch', '')   # strip the surrounding characters
        json_data = json.loads(content)
        with open('data/' + today + '.json', 'w', encoding='UTF-8') as f:
            json.dump(json_data, f, ensure_ascii=False)
    except:
        print('<Response [%s]>' % response.status_code)


def crawl_statistics_data():
    """
    Fetch the historical statistics of every province and save them
    as data/statistics_data.json.
    """
    with open('data/' + today + '.json', 'r', encoding='UTF-8') as file:
        json_array = json.loads(file.read())
    statistics_data = {}
    for province in json_array:
        response = requests.get(province['statisticsData'])
        try:
            statistics_data[province['provinceShortName']] = json.loads(response.content.decode())['data']
        except:
            print('<Response [%s]> for url: [%s]' % (response.status_code, province['statisticsData']))
    with open("data/statistics_data.json", "w", encoding='UTF-8') as f:
        json.dump(statistics_data, f, ensure_ascii=False)


if __name__ == '__main__':
    crawl_dxy_data()
    crawl_statistics_data()
Plotting code:
from pyecharts.charts import Pie
from pyecharts import options as opts   # opts is used below for the chart options

# china_data: (province, cumulative confirmed count) pairs prepared from the crawled data
c = (
    Pie(init_opts=opts.InitOpts(width="700px", height="900px"))   # canvas size
    .add("", china_data,                                          # data input
         radius="50%")                                            # pie takes up 50% of the canvas
    .set_global_opts(title_opts=opts.TitleOpts(title="Cumulative confirmed cases nationwide, March 31"),
                     legend_opts=opts.LegendOpts(type_="scroll", pos_left="90%", orient="vertical"))   # legend position
    .set_series_opts(label_opts=opts.LabelOpts(formatter="{b}: {c}"))
    # .radius(60, 80)
)
c.render_notebook()   # render the chart in the notebook
Day02 - Gesture Recognition
This was the assignment I ran the most times. At first I had no idea how to improve it, so I tried whatever I could find, followed the discussions in the group chat, asked plenty of beginner questions, and just barely submitted a score of 0.86 before the deadline.
In the days after submitting I kept experimenting with the newly learned VGG network, and a simplified VGG of my own reached 0.958.
I started with a DNN: fast to train, but not very accurate.
# Define a simple DNN
class Mynet(fluid.dygraph.Layer):
    def __init__(self):
        super(Mynet, self).__init__()
        # the first three Linear layers act on the last dimension of the [N, 3, 100, 100] input
        self.hidden1 = Linear(100, 100, act='relu')
        self.hidden2 = Linear(100, 100, act='relu')
        self.hidden3 = Linear(100, 100, act='relu')
        self.hidden4 = Linear(3*100*100, 10, act='softmax')

    def forward(self, input):
        x = self.hidden1(input)
        x = self.hidden2(x)
        x = self.hidden3(x)
        x = fluid.layers.reshape(x, shape=[-1, 3*100*100])
        y = self.hidden4(x)
        return y
I then tried modifying the convolutional network and its training parameters, which got me to 0.86. A few findings along the way: SGD was extremely slow, and the learning rate had to be raised to about 0.3-0.5 before the loss dropped at a reasonable pace; Adam did not converge for me; Adagrad was the most reliable, converging smoothly.
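For reference, this is roughly how I swapped optimizers in the dygraph setup (a minimal sketch; model stands for the network defined below, and the learning rates are only ballpark values from my runs):

import paddle.fluid as fluid

with fluid.dygraph.guard():
    model = Mynet()
    # SGD: only made progress with a fairly large learning rate (around 0.3-0.5 in my runs)
    # opt = fluid.optimizer.SGDOptimizer(learning_rate=0.3, parameter_list=model.parameters())
    # Adam: did not converge for me on this task
    # opt = fluid.optimizer.AdamOptimizer(learning_rate=0.001, parameter_list=model.parameters())
    # Adagrad: converged smoothly, so this is what I kept
    opt = fluid.optimizer.AdagradOptimizer(learning_rate=0.001, parameter_list=model.parameters())

The modified network itself: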
class Mynet(fluid.dygraph.Layer):
    def __init__(self):
        super(Mynet, self).__init__()
        # the trailing numbers are the feature-map sizes for a 100x100 input
        self.hidden1 = Conv2D(num_channels=3, num_filters=64, filter_size=5, stride=2, padding=2, act='relu')    # 50
        self.hidden2 = Pool2D(pool_size=3, pool_type='max', pool_stride=2)                                       # 24
        self.hidden3 = Conv2D(num_channels=64, num_filters=128, filter_size=5, stride=1, padding=0, act='relu')  # 20
        self.hidden4 = Pool2D(pool_size=2, pool_type='max', pool_stride=2)                                       # 10
        self.hidden5 = Conv2D(num_channels=128, num_filters=64, filter_size=5, stride=1, padding=0, act='relu')  # 6
        self.fc1 = Linear(input_dim=64*6*6, output_dim=10, act='softmax')

    # Forward pass: each convolution is followed by pooling, and a fully connected layer produces the final output
    def forward(self, inputs):
        x = self.hidden1(inputs)
        x = self.hidden2(x)
        # print(x.shape)
        x = self.hidden3(x)
        x = self.hidden4(x)
        # print(x.shape)
        x = self.hidden5(x)
        # print(x.shape)
        x = fluid.layers.reshape(x, shape=[-1, 64*6*6])
        x = self.fc1(x)
        return x
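The feature-map sizes noted in the comments follow the usual convolution/pooling size formula: output = floor((input - kernel + 2*padding) / stride) + 1. For example, the first conv layer gives floor((100 - 5 + 2*2) / 2) + 1 = 50, and the first pooling layer gives floor((50 - 3) / 2) + 1 = 24, matching the comments above.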
After learning VGG I tried again. The full VGG-16 did not work well, so I simplified it step by step and ended up with a much shallower VGG-16-style network (below), which worked nicely. With Adagrad, a learning rate of 0.001 and batch size 32, 40 epochs gave 0.9375 and 50 epochs gave 0.9598214 (a training-loop sketch with these settings follows the network definition).
class Mynet(fluid.dygraph.Layer):
    def __init__(self):
        super(Mynet, self).__init__()
        # the trailing numbers are the feature-map sizes for a 100x100 input
        self.conv1 = Conv2D(num_channels=3, num_filters=32, filter_size=5, stride=1, padding=2, act='relu')     # 100
        self.pool1 = Pool2D(pool_size=2, pool_type='max', pool_stride=2)                                        # 50
        self.conv2 = Conv2D(num_channels=32, num_filters=64, filter_size=5, stride=1, padding=2, act='relu')    # 50
        self.pool2 = Pool2D(pool_size=2, pool_type='max', pool_stride=2)                                        # 25
        self.conv3 = Conv2D(num_channels=64, num_filters=128, filter_size=5, stride=1, padding=2, act='relu')   # 25
        self.pool3 = Pool2D(pool_size=2, pool_type='max', pool_stride=2)                                        # 12
        self.conv4 = Conv2D(num_channels=128, num_filters=256, filter_size=5, stride=1, padding=2, act='relu')  # 12
        self.pool4 = Pool2D(pool_size=2, pool_type='max', pool_stride=2)                                        # 6
        self.fc1 = Linear(256*6*6, 4096, act='relu')
        self.fc2 = Linear(4096, 4096, act='relu')
        self.fc3 = Linear(4096, 10, act='softmax')

    # Forward pass: convolutions followed by pooling, then fully connected layers for the final output
    def forward(self, inputs):
        x = self.conv1(inputs)
        x = self.pool1(x)
        # print(x.shape)
        x = self.conv2(x)
        x = self.pool2(x)
        # print(x.shape)
        x = self.conv3(x)
        x = self.pool3(x)
        # print(x.shape)
        x = self.conv4(x)
        x = self.pool4(x)
        # print(x.shape)
        x = fluid.layers.reshape(x, shape=[-1, 256*6*6])
        x = self.fc1(x)
        x = self.fc2(x)
        x = self.fc3(x)
        return x
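For completeness, a minimal sketch of the training loop I used with the settings above (Adagrad, learning rate 0.001, batch size 32). train_loader is assumed to be a generator yielding float32 image batches of shape [32, 3, 100, 100] and int64 labels of shape [32, 1]; it is not shown here:

with fluid.dygraph.guard():
    model = Mynet()
    model.train()
    opt = fluid.optimizer.AdagradOptimizer(learning_rate=0.001, parameter_list=model.parameters())
    epochs = 50                                             # 40 epochs -> 0.9375, 50 epochs -> 0.9598 in my runs
    for epoch in range(epochs):
        for batch_id, (imgs, labels) in enumerate(train_loader()):
            imgs = fluid.dygraph.to_variable(imgs)
            labels = fluid.dygraph.to_variable(labels)
            out = model(imgs)
            loss = fluid.layers.cross_entropy(out, labels)  # out is already softmax-ed, so plain cross_entropy works
            avg_loss = fluid.layers.mean(loss)
            avg_loss.backward()
            opt.minimize(avg_loss)
            model.clear_gradients()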
Day03 - License Plate Recognition
Here we learned LeNet, a fairly simple convolutional network that really showed me the appeal of CNNs. Running the instructor's reference code straight through got me to 0.964. Satisfied, I submitted and then excitedly took LeNet over to the gesture-recognition task, only to get a bucket of cold water: the results there were mediocre.
The code, which can be written quite compactly:
# Define the network
class MyLeNet(fluid.dygraph.Layer):
    def __init__(self):
        super(MyLeNet, self).__init__()
        self.hidden1_1 = Conv2D(1, 64, 5, 1)                                   # 28
        self.hidden1_2 = Pool2D(pool_size=2, pool_type='max', pool_stride=1)   # 27
        self.hidden2_1 = Conv2D(64, 64, 3, 1)                                  # 25
        self.hidden2_2 = Pool2D(pool_size=2, pool_type='max', pool_stride=1)   # 24
        self.hidden3 = Conv2D(64, 32, 3, 1)                                    # 22
        self.hidden4 = Linear(32*10*10, 65, act='softmax')

    def forward(self, input):
        x = self.hidden1_1(input)
        x = self.hidden1_2(x)
        x = self.hidden2_1(x)
        x = self.hidden2_2(x)
        x = self.hidden3(x)
        x = fluid.layers.reshape(x, shape=[-1, 32*10*10])
        y = self.hidden4(x)
        return y
Day04 - Mask Classification
Here we learned the VGG model and implemented VGG-16. We also used a centralized parameter dictionary for the training configuration, which felt very professional, although in practice I found tweaking parameters through it less direct than editing them in the code cells.
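A minimal sketch of what such a configuration dictionary looks like (the keys and values here are illustrative, not the exact ones from the course notebook):

train_parameters = {
    "input_size": [3, 224, 224],      # image shape fed to VGG-16
    "class_dim": 2,                   # mask / no-mask
    "num_epochs": 20,
    "train_batch_size": 32,
    "learning_strategy": {
        "lr": 0.001                   # base learning rate
    }
}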
With a deeper network I tried dropout to fight overfitting, but after a few runs it worked less well than simply stopping training earlier. Learning to watch the training accuracy and stop early was enough to avoid overfitting (see the early-stopping sketch after the epoch comparison below).
The VGG-16 structure:
VGG-16 implementation code:
ConvPool (a conv-plus-pooling helper defined elsewhere in the notebook) is called with abbreviated positional arguments, so its parameter list is pasted at the top for easier reading:
# ConvPool parameters: num_channels, num_filters, filter_size, pool_size, pool_stride, groups, pool_padding=0, pool_type='max', conv_stride=1, conv_padding=1, act=None
# Define the network
class VGGNet(fluid.dygraph.Layer):
    '''
    VGG network
    '''
    def __init__(self):
        super(VGGNet, self).__init__()
        # the trailing comments are the output shapes for a batch of 8 images
        self.convpool1 = ConvPool(  3,  64, 3, 2, 2, 2, act='relu')   # 8 64 112 112
        self.convpool2 = ConvPool( 64, 128, 3, 2, 2, 2, act='relu')   # 8 128 56 56
        self.convpool3 = ConvPool(128, 256, 3, 2, 2, 3, act='relu')   # 8 256 28 28
        self.convpool4 = ConvPool(256, 512, 3, 2, 2, 3, act='relu')   # 8 512 14 14
        self.convpool5 = ConvPool(512, 512, 3, 2, 2, 3, act='relu')   # 8 512 7 7
        self.linear1 = Linear(512*7*7, 4096, act='relu')
        # self.drop_ratio = 0.5
        self.linear2 = Linear(4096, 4096, act='relu')
        self.linear3 = Linear(4096, 2, act='softmax')

    def forward(self, inputs, label=None):
        x = self.convpool1(inputs)
        # print(x.shape)
        x = self.convpool2(x)
        # print(x.shape)
        x = self.convpool3(x)
        # print(x.shape)
        x = self.convpool4(x)
        # print(x.shape)
        x = self.convpool5(x)
        # print(x.shape)
        x = fluid.layers.reshape(x, shape=[-1, 512*7*7])
        x = self.linear1(x)
        # x = fluid.layers.dropout(x, self.drop_ratio)
        x = self.linear2(x)
        # x = fluid.layers.dropout(x, self.drop_ratio)
        out = self.linear3(x)
        if label is not None:
            acc = fluid.layers.accuracy(input=out, label=label)
            return out, acc
        else:
            return out
I tried several values of num_epochs:
num_epochs 30: around 0.6, overfitted
num_epochs 10: around 0.87, slightly undertrained
num_epochs 20: 1.0, just right
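The early stopping mentioned above can be done by hand with a check on the training accuracy at the end of each epoch. A minimal sketch, assuming the same kind of training loop as on Day02; the 0.99 threshold and the train_acc bookkeeping are placeholders of my own:

stop_threshold = 0.99                                       # stop once the average training accuracy reaches this level
for epoch in range(train_parameters["num_epochs"]):
    epoch_accs = []
    for batch_id, (imgs, labels) in enumerate(train_loader()):
        imgs = fluid.dygraph.to_variable(imgs)
        labels = fluid.dygraph.to_variable(labels)
        out, acc = model(imgs, labels)                      # VGGNet.forward returns (out, acc) when a label is passed
        avg_loss = fluid.layers.mean(fluid.layers.cross_entropy(out, labels))
        avg_loss.backward()
        opt.minimize(avg_loss)
        model.clear_gradients()
        epoch_accs.append(float(acc.numpy()[0]))
    train_acc = sum(epoch_accs) / len(epoch_accs)
    print("epoch %d, train acc %.4f" % (epoch, train_acc))
    if train_acc >= stop_threshold:                         # good enough: stop early instead of relying on dropout
        break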
Day06 - PaddleSlim Model Compression
PaddleSlim is powerful and easy to use; this is also where I learned about PaddleHub, PaddleX and the other tools in the ecosystem, all of which look great.
It was only at this point that I realized I should go study PaddleHub: with its collection of pre-trained models, it might solve my real-world problems directly.
Finally, the competition: Crowd Density Estimation
I never fully understood how the regression problem is set up inside this convolutional network (a note on how I understand the output is turned into a headcount follows the network code below). I took the baseline network, tweaked it and ran it; because of limited time and how crowded AI Studio was, I only got it to run through twice, finishing 100/438. I'll keep a screenshot as a memento.
My modified CNN:
class CNN(fluid.dygraph.Layer):
    '''
    Network
    '''
    def __init__(self):
        super(CNN, self).__init__()
        # the trailing comments are the spatial sizes for a 640x480 input
        self.conv01_1 = fluid.dygraph.Conv2D(num_channels=3, num_filters=64, filter_size=3, padding=1, act="relu")    # 640 x 480
        self.conv01_2 = fluid.dygraph.Conv2D(num_channels=64, num_filters=64, filter_size=3, padding=1, act="relu")   # 640 x 480
        self.pool01 = fluid.dygraph.Pool2D(pool_size=2, pool_type='max', pool_stride=2)                               # 320 x 240
        self.conv02_1 = fluid.dygraph.Conv2D(num_channels=64, num_filters=128, filter_size=3, padding=1, act="relu")
        self.conv02_2 = fluid.dygraph.Conv2D(num_channels=128, num_filters=128, filter_size=3, padding=1, act="relu")
        self.pool02 = fluid.dygraph.Pool2D(pool_size=2, pool_type='max', pool_stride=2)                               # 160 x 120
        self.conv03_1 = fluid.dygraph.Conv2D(num_channels=128, num_filters=256, filter_size=3, padding=1, act="relu")
        self.conv03_2 = fluid.dygraph.Conv2D(num_channels=256, num_filters=256, filter_size=3, padding=1, act="relu")
        self.conv03_3 = fluid.dygraph.Conv2D(num_channels=256, num_filters=256, filter_size=3, padding=1, act="relu")
        self.pool03 = fluid.dygraph.Pool2D(pool_size=2, pool_type='max', pool_stride=2)                               # 80 x 60
        self.conv04_1 = fluid.dygraph.Conv2D(num_channels=256, num_filters=512, filter_size=3, padding=1, act="relu")
        self.conv04_2 = fluid.dygraph.Conv2D(num_channels=512, num_filters=512, filter_size=3, padding=1, act="relu")
        self.conv05_1 = fluid.dygraph.Conv2D(num_channels=512, num_filters=512, filter_size=3, padding=1, act="relu")
        self.conv05_2 = fluid.dygraph.Conv2D(num_channels=512, num_filters=512, filter_size=3, padding=1, act="relu")
        # output head: gradually reduce the channels down to a single-channel map
        self.conv06 = fluid.dygraph.Conv2D(num_channels=512, num_filters=256, filter_size=3, padding=1, act='relu')
        self.conv07 = fluid.dygraph.Conv2D(num_channels=256, num_filters=128, filter_size=3, padding=1, act='relu')
        self.conv08 = fluid.dygraph.Conv2D(num_channels=128, num_filters=64, filter_size=3, padding=1, act='relu')
        self.conv09 = fluid.dygraph.Conv2D(num_channels=64, num_filters=1, filter_size=1, padding=0, act=None)

    def forward(self, inputs, label=None):
        """Forward pass"""
        out = self.conv01_1(inputs)
        out = self.conv01_2(out)
        out = self.pool01(out)
        out = self.conv02_1(out)
        out = self.conv02_2(out)
        out = self.pool02(out)
        out = self.conv03_1(out)
        out = self.conv03_2(out)
        out = self.conv03_3(out)
        out = self.pool03(out)
        out = self.conv04_1(out)
        out = self.conv04_2(out)
        out = self.conv05_1(out)
        out = self.conv05_2(out)
        out = self.conv06(out)
        out = self.conv07(out)
        out = self.conv08(out)
        out = self.conv09(out)
        return out
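My current understanding is that the baseline frames this as density-map regression: the network outputs a single-channel map, the training target is a density map built from the annotated head positions, the loss is a pixel-wise squared error, and the predicted headcount is the sum over the output map. A minimal inference sketch under those assumptions (test_img is a placeholder for a preprocessed float32 array of shape [1, 3, 480, 640]):

with fluid.dygraph.guard():
    model = CNN()
    model.eval()
    img = fluid.dygraph.to_variable(test_img)
    density_map = model(img)                                            # roughly [1, 1, 60, 80] after the three 2x poolings
    count = float(fluid.layers.reduce_sum(density_map).numpy()[0])      # headcount = sum over the density map
    print("predicted count: %.1f" % count)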
No prizes for me, but I found PaddleHub
For someone crossing into this field, learning how to apply existing models is probably more important. A deeper understanding of all the models would certainly help in practice, but that takes time, and none of the raffles or leaderboard prizes came my way, so I'll put my focus on using PaddleHub models and the magical PaddleX instead.