Article list
1. Writing my own deep learning framework ANNbox [tensorflow look-alike]__01: implementing fully connected layers.
2. Writing my own deep learning framework ANNbox [tensorflow look-alike]__02: implementing different optimization methods.
3. Writing my own deep learning framework ANNbox [tensorflow look-alike]__03: text sentiment analysis.
4. Writing my own deep learning framework ANNbox [tensorflow look-alike]__04: writing convolutional neural networks: implementing AlexNet.
5. Writing my own deep learning framework ANNbox [tensorflow look-alike]__05: writing recurrent neural networks: implementing an RNN (01).
…
[TOC]
Writing my own deep learning framework ANNbox [tensorflow look-alike]__01: implementing an MLP
Why write a neural network library by hand? Two reasons:
1. A few days ago, while studying how weight and bias initialization affects a neural network, I tried to watch the coefficients change from iteration to iteration. With sess.run() during training I could only read the initial weight values; the updated weights were invisible, and the only way to observe intermediate variables was to visualize the weights through tensorboard (tf.histogram_summary(layer_name + '/weights', Weights)). So I thought: if I wrote the framework myself, the program could expose intermediate variables during the iterations.
2. Long ago I also hand-coded a fully connected network, back then with one class per layer [see the series "Fundamental deep learning models: algorithm principles and implementation"]. Later I studied MiniFlow [see "Fundamental deep learning models: algorithm principles and implementation - 11. Building a simplified tensorflow: MiniFlow"] and found it impressive. The main takeaways were:
(a) Topological sorting: Kahn's algorithm arranges the directed acyclic graph into a linear sequence, so the program can carry out the forward and backward passes by itself, automatically.
(b) More thorough class decomposition: each elementary operation (linear weighting, activation transform, loss function, and so on) is written as its own class.
These two points let me write code that is more portable and more turnkey, just like tensorflow. So I reorganized my thinking and set out to write my own neural network library, ANNbox. ANNbox is used almost exactly the way tensorflow is; the only difference is that you do not need tensorflow installed on your machine, and code written for the tensorflow framework runs after changing import tensorflow as tf to import ANNbox as tf [at the moment a few more items still need changing]. Before reading this article, please see "Fundamental deep learning models: algorithm principles and implementation - 11. Building a simplified tensorflow: MiniFlow" and "Fundamental deep learning models: algorithm principles and implementation - 03. Fully connected networks" for the necessary background.
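The Kahn-style topological sort in point (a) is the backbone of automatic forward/backward execution: once the graph is flattened into a linear order, the forward pass just visits nodes left to right and the backward pass right to left. Here is a minimal, self-contained sketch of Kahn's algorithm (the node/edge representation is illustrative only, not ANNbox's actual data structures):

```python
from collections import deque

def topological_sort(nodes, edges):
    """Kahn's algorithm: repeatedly emit a node with no remaining
    incoming edges, removing its outgoing edges as we go."""
    indegree = {n: 0 for n in nodes}
    outgoing = {n: [] for n in nodes}
    for src, dst in edges:
        outgoing[src].append(dst)
        indegree[dst] += 1
    queue = deque(n for n in nodes if indegree[n] == 0)  # start from the sources
    order = []
    while queue:
        n = queue.popleft()
        order.append(n)
        for m in outgoing[n]:
            indegree[m] -= 1
            if indegree[m] == 0:
                queue.append(m)
    if len(order) != len(nodes):
        raise ValueError('graph has a cycle')
    return order

# x -> linear -> sigmoid, with w feeding into linear
print(topological_sort(['x', 'w', 'linear', 'sigmoid'],
                       [('x', 'linear'), ('w', 'linear'), ('linear', 'sigmoid')]))
```

Running the forward pass over `order` and the backward pass over `reversed(order)` is exactly what MiniFlow (and ANNbox) do.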
By the end of this section we will classify the MNIST data set with a 2-layer MLP. Only four files - activator.py, ANNbox.py, base.py, optimization.py - are needed to run a program that is nearly identical to its tensorflow counterpart. The two runs compare as follows [almost the same], reaching 95% accuracy on the final test set [unfortunately ANNbox takes noticeably longer to run; this will be improved over time]:


The four figures above show that, running on a CPU, my ANNbox library takes roughly 1.3 to 1.5 times as long as tensorflow; there should be plenty of room for improvement [feedback welcome].
1.1 The linear weighted-input class Linear
class Linear(Node):
    '''Linear weighted-input node: computes the weighted input X·W + b.'''
    def __init__(self, X, W, b, name=[]):
        Node.__init__(self, inbound_nodes=[X, W, b], name=name)

    def forward(self):
        X = self.inbound_nodes[0].value
        W = self.inbound_nodes[1].value
        b = self.inbound_nodes[2].value
        self.value = np.dot(X, W) + b

    def backward(self):
        # start from zero gradients, then accumulate from every consumer node
        self.gradients = {n: np.zeros_like(n.value) for n in self.inbound_nodes}
        for n in self.outbound_nodes:
            grad_cost = n.gradients[self]
            self.gradients[self.inbound_nodes[0]] += np.dot(grad_cost, self.inbound_nodes[1].value.T)
            self.gradients[self.inbound_nodes[1]] += np.dot(self.inbound_nodes[0].value.T, grad_cost)
            self.gradients[self.inbound_nodes[2]] += np.sum(grad_cost, axis=0, keepdims=False)  # axis=0 sums over the batch
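The three gradient formulas in backward can be sanity-checked numerically. The sketch below is plain NumPy, independent of the Node class (the names X, W, b, G are illustrative); it compares each analytic gradient of a scalar loss against central finite differences:

```python
import numpy as np

rng = np.random.RandomState(0)
X = rng.randn(4, 3); W = rng.randn(3, 2); b = rng.randn(2)
G = rng.randn(4, 2)  # stand-in for the upstream gradient grad_cost

def loss(X, W, b):
    # scalar loss whose gradient w.r.t. the Linear output is exactly G
    return np.sum((np.dot(X, W) + b) * G)

# analytic gradients, the same formulas as Linear.backward
dX = np.dot(G, W.T)
dW = np.dot(X.T, G)
db = np.sum(G, axis=0)

def numeric_grad(f, a, eps=1e-6):
    """Central finite differences of scalar f w.r.t. array a (mutated in place, then restored)."""
    g = np.zeros_like(a)
    for i in np.ndindex(a.shape):
        a[i] += eps; up = f()
        a[i] -= 2 * eps; down = f()
        a[i] += eps
        g[i] = (up - down) / (2 * eps)
    return g

assert np.allclose(dX, numeric_grad(lambda: loss(X, W, b), X))
assert np.allclose(dW, numeric_grad(lambda: loss(X, W, b), W))
assert np.allclose(db, numeric_grad(lambda: loss(X, W, b), b))
print('all three gradient formulas match finite differences')
```

Note how db is a column sum: because b is broadcast over the batch in the forward pass, its gradient must sum the upstream gradient over axis 0.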
1.2 The matrix-multiplication class matmul

class matmul(Node):
    '''Matrix-multiplication node: computes X·W.'''
    def __init__(self, X, W, name=[]):
        Node.__init__(self, inbound_nodes=[X, W], name=name)

    def forward(self):
        X = self.inbound_nodes[0].value
        W = self.inbound_nodes[1].value
        self.value = np.dot(X, W)

    def backward(self):
        self.gradients = {n: np.zeros_like(n.value) for n in self.inbound_nodes}
        for n in self.outbound_nodes:
            grad_cost = n.gradients[self]
            self.gradients[self.inbound_nodes[0]] += np.dot(grad_cost, self.inbound_nodes[1].value.T)
            self.gradients[self.inbound_nodes[1]] += np.dot(self.inbound_nodes[0].value.T, grad_cost)
1.3 The matrix-addition class Add [used for operator overloading]

The detailed notation and derivations behind the figure above are given in "Fundamental deep learning models: algorithm principles and implementation - 03. Fully connected networks". The corresponding implementation:
class Add(Node):
    '''Element-wise addition node: computes Y + b (b is broadcast over the batch).'''
    def __init__(self, Y, b, name=[]):
        Node.__init__(self, inbound_nodes=[Y, b], name=name)

    def forward(self):
        Y = self.inbound_nodes[0].value
        b = self.inbound_nodes[1].value
        self.value = Y + b

    def backward(self):
        self.gradients = {n: np.zeros_like(n.value) for n in self.inbound_nodes}
        for n in self.outbound_nodes:
            grad_cost = n.gradients[self]
            self.gradients[self.inbound_nodes[0]] += grad_cost
            self.gradients[self.inbound_nodes[1]] += np.sum(grad_cost, axis=0, keepdims=False)  # axis=0 sums over the batch
1.4 The matrix-subtraction class Sub [used for operator overloading]
class Sub(Node):
    '''Element-wise subtraction node: computes Y - label.'''
    def __init__(self, Y, label, name=[]):
        Node.__init__(self, inbound_nodes=[Y, label], name=name)

    def forward(self):
        Y = self.inbound_nodes[0].value
        label = self.inbound_nodes[1].value
        self.value = Y - label

    def backward(self):
        self.gradients = {n: np.zeros_like(n.value) for n in self.inbound_nodes}
        for n in self.outbound_nodes:
            grad_cost = n.gradients[self]
            self.gradients[self.inbound_nodes[0]] += grad_cost
            self.gradients[self.inbound_nodes[1]] -= grad_cost
1.5 The squaring class Squared
class Squared(Node):
    '''Element-wise squaring node: computes the square of its input.'''
    def __init__(self, o, name=[]):
        Node.__init__(self, inbound_nodes=[o], name=name)

    def forward(self):
        self.value = self.inbound_nodes[0].value ** 2

    def backward(self):
        self.gradients = {n: np.zeros_like(n.value) for n in self.inbound_nodes}
        for n in self.outbound_nodes:
            grad_cost = n.gradients[self]
            # d(x^2)/dx = 2x
            self.gradients[self.inbound_nodes[0]] += 2 * self.inbound_nodes[0].value * grad_cost
1.6 The reduce_mean class
def list_product(input_):
    '''Product of the entries of a shape tuple, i.e. the number of elements.'''
    o = 1
    for i in input_:
        o *= i
    return o

class reduce_mean(Node):
    '''Mean-reduction node: computes the mean over all elements of its input.'''
    def __init__(self, o, name=[]):
        Node.__init__(self, inbound_nodes=[o], name=name)

    def forward(self):
        self.value = np.mean(self.inbound_nodes[0].value)

    def backward(self):
        self.gradients = {n: np.zeros_like(n.value) for n in self.inbound_nodes}
        if self.outbound_nodes == []:
            # this node is the loss itself: seed the gradient with ones / N
            grad_cost = np.ones_like(self.inbound_nodes[0].value)
            grad_cost /= list_product(self.inbound_nodes[0].value.shape)
            self.gradients[self.inbound_nodes[0]] = np.ones_like(self.inbound_nodes[0].value) * grad_cost / list_product(grad_cost.shape) / list_product(grad_cost.shape)
        else:
            for n in self.outbound_nodes:
                grad_cost = n.gradients[self]
                self.gradients[self.inbound_nodes[0]] += np.ones_like(self.inbound_nodes[0].value) * grad_cost / list_product(grad_cost.shape) / list_product(grad_cost.shape)
        # I do not understand why list_product(grad_cost.shape) has to be divided out
        # a second time, but this is what makes the results agree with tensorflow's
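As a sanity check on the scale factor: for a scalar m = mean(X), the partial derivative with respect to every entry of X is exactly 1/N, where N = list_product(X.shape). A finite-difference probe of one entry confirms this:

```python
import numpy as np

X = np.arange(12, dtype=float).reshape(3, 4)
N = X.size  # same as list_product(X.shape) == 12

# central finite difference of np.mean at entry (1, 2)
eps = 1e-6
X[1, 2] += eps
up = np.mean(X)
X[1, 2] -= 2 * eps
down = np.mean(X)
X[1, 2] += eps  # restore X
grad = (up - down) / (2 * eps)

print(grad, 1.0 / N)  # both are 1/12 up to rounding
assert abs(grad - 1.0 / N) < 1e-6
```

Every entry gets the same 1/N, which is why the backward pass fills the gradient with a constant array.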
1.7 Adding operator overloads to the Node class
1.7.1 The plus operator
class Node(object):
    ...
    def __add__(self, other):
        return Add(self, other)
1.7.2 The minus operator
class Node(object):
    ...
    def __sub__(self, other):
        return Sub(self, other)
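With these two overloads in place, writing y + b or y - b in user code builds an Add or Sub graph node instead of performing arithmetic. A stripped-down illustration with stand-in classes (these are not ANNbox's real Node, Add and Sub, just the minimum needed to show the mechanism):

```python
class Node(object):
    def __init__(self, inbound_nodes=None):
        self.inbound_nodes = inbound_nodes or []

    def __add__(self, other):
        # Python rewrites `a + b` as `a.__add__(b)`
        return Add(self, other)

    def __sub__(self, other):
        return Sub(self, other)

class Add(Node):
    def __init__(self, a, b):
        Node.__init__(self, [a, b])

class Sub(Node):
    def __init__(self, a, b):
        Node.__init__(self, [a, b])

y, b = Node(), Node()
expr = y + b                          # builds a graph node, no arithmetic happens
print(type(expr).__name__)            # Add
print(expr.inbound_nodes == [y, b])   # True
```

This is what lets user code like tf.matmul(hidden1, W2) + b2 read like ordinary math while silently extending the computation graph.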
1.8 Activation function classes

The detailed notation and derivations behind the figure above are given in "Fundamental deep learning models: algorithm principles and implementation - 03. Fully connected networks".
1.8.1 The sigmoid activation function

The corresponding implementation:
class Sigmoid(Node):
    '''Sigmoid activation node: applies a non-linear transform to the weighted input.'''
    def __init__(self, node, name=[]):
        Node.__init__(self, inbound_nodes=[node], name=name)

    def _sigmoid(self, x):
        return 1. / (1. + np.exp(-x))

    def forward(self):
        self.value = self._sigmoid(self.inbound_nodes[0].value)

    def backward(self):
        self.gradients = {n: np.zeros_like(n.value) for n in self.inbound_nodes}
        for n in self.outbound_nodes:
            grad_cost = n.gradients[self]
            sigmoid = self.value
            # d sigmoid(x)/dx = sigmoid(x) * (1 - sigmoid(x))
            self.gradients[self.inbound_nodes[0]] += sigmoid * (1 - sigmoid) * grad_cost
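The backward pass relies on the identity sigmoid'(x) = sigmoid(x) * (1 - sigmoid(x)), which is why the cached forward value can be reused; a quick numerical confirmation:

```python
import numpy as np

def sigmoid(x):
    return 1. / (1. + np.exp(-x))

x = np.linspace(-4, 4, 9)
analytic = sigmoid(x) * (1 - sigmoid(x))            # the identity used in backward
eps = 1e-6
numeric = (sigmoid(x + eps) - sigmoid(x - eps)) / (2 * eps)  # central differences
assert np.allclose(analytic, numeric)
print('sigmoid derivative identity verified')
```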
1.8.2 The ReLU activation function

The corresponding implementation:
class ReLU(Node):
    '''ReLU activation node: applies a non-linear transform to the weighted input.'''
    def __init__(self, node, name=[]):
        Node.__init__(self, inbound_nodes=[node], name=name)

    def _ReLU(self, x):
        return np.maximum(x, 0.0)            # element-wise max(x, 0)

    def _ReLU_Derivative(self, x):
        return (x > 0.0).astype(np.float64)  # 1 where x > 0, else 0

    def forward(self):
        self.value = self._ReLU(self.inbound_nodes[0].value)

    def backward(self):
        self.gradients = {n: np.zeros_like(n.value) for n in self.inbound_nodes}
        for n in self.outbound_nodes:
            grad_cost = n.gradients[self]
            relu = self._ReLU_Derivative(self.inbound_nodes[0].value)
            self.gradients[self.inbound_nodes[0]] += relu * grad_cost
1.9 Which terms propagate the gradient

1.10 Loss function classes
1.10.1 The softmax_cross_entropy_with_logits loss class
The softmax_cross_entropy_with_logits loss decomposes into a softmax operation followed by a cross-entropy loss; the two steps are described separately below.
1.10.1.1 The softmax operation

The detailed notation and derivations behind the figure above are given in "Fundamental deep learning models: algorithm principles and implementation - 03. Fully connected networks". The corresponding implementation:
class softmax(Node):
    '''softmax is another subclass of Node; it implements the softmax operation.'''
    def __init__(self, x, name=[]):
        Node.__init__(self, inbound_nodes=[x], name=name)  # call Node's constructor
        self.is_eff = True

    def forward(self):
        logits = self.inbound_nodes[0].value
        # subtract the row-wise max before exponentiating, for numerical stability
        logits = np.exp(logits - np.max(logits, 1, keepdims=1))
        logsum = np.sum(logits, 1, keepdims=1)
        self.value = logits / logsum

    def backward(self):
        self.gradients = {n: np.zeros_like(n.value) for n in self.inbound_nodes}
        for n in self.outbound_nodes:
            grad_cost = n.gradients[self]
            self.gradients[self.inbound_nodes[0]] += self.value * (1. - self.value) * grad_cost
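The row-max subtraction in forward is what keeps the exponentials finite: softmax is invariant to shifting every logit in a row by the same constant, while the naive version overflows for large logits. A small demonstration:

```python
import numpy as np

logits = np.array([[1000., 1001., 1002.]])

# naive softmax: exp(1000) overflows to inf, and inf/inf gives nan
with np.errstate(over='ignore', invalid='ignore'):
    naive = np.exp(logits) / np.sum(np.exp(logits), 1, keepdims=True)

# shifted softmax: subtracting the row max leaves the result unchanged
shifted = np.exp(logits - np.max(logits, 1, keepdims=True))
stable = shifted / np.sum(shifted, 1, keepdims=True)

print(naive)   # [[nan nan nan]]
print(stable)  # well-defined probabilities summing to 1
assert np.isnan(naive).all()
assert np.allclose(np.sum(stable, 1), 1.0)
```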
1.10.1.2 The cross-entropy loss

The detailed notation and derivations behind the figure above are given in "Fundamental deep learning models: algorithm principles and implementation - 03. Fully connected networks". The corresponding implementation:
class cross_entropy_with_logits(Node):
    '''cross_entropy_with_logits is another subclass of Node; it implements the cross-entropy loss.'''
    def __init__(self, labels, logits, name=[]):
        Node.__init__(self, inbound_nodes=[labels, logits], name=name)  # call Node's constructor
        self.is_eff = True

    def forward(self):
        labels = self.inbound_nodes[0].value
        logits = self.inbound_nodes[1].value
        self.m = labels.shape[0]
        # per-sample cross entropy, summed over the classes
        self.value = np.sum(-labels * np.log(logits) - (1 - labels) * np.log(1 - logits), 1)
        self.diff = (logits - labels) / (logits * (1 - logits))

    def backward(self):
        self.gradients = {}
        self.gradients[self.inbound_nodes[1]] = self.diff
1.10.1.3 The softmax_cross_entropy_with_logits loss

The detailed notation and derivations behind the figure above are given in "Fundamental deep learning models: algorithm principles and implementation - 03. Fully connected networks". The corresponding implementation:
class softmax_cross_entropy_with_logits(Node):
    def __init__(self, labels, logits, name=[]):
        """The softmax cross-entropy loss function."""
        Node.__init__(self, inbound_nodes=[labels, logits], name=name)
        self.is_eff = True

    def softmax(self, logits):
        logits = np.exp(logits)
        for i in range(logits.shape[0]):
            logits[i, :] = logits[i, :] / np.sum(logits[i, :])
        return logits

    def forward(self):
        labels = self.inbound_nodes[0].value
        logits = self.inbound_nodes[1].value
        # numerically stable softmax: subtract the row-wise max first
        logits = np.exp(logits - np.max(logits, 1, keepdims=1))
        self.m = labels.shape[0]
        logsum = np.sum(logits, 1, keepdims=1)
        logits_ = logits / logsum
        self.value = np.sum(-labels * np.log(logits_) - (1 - labels) * np.log(1 - logits_), 1)
        self.diff = logits_ - labels

    def backward(self):
        self.gradients = {}
        self.gradients[self.inbound_nodes[1]] = self.diff
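The payoff of fusing softmax with the cross entropy is the clean gradient stored in self.diff. For the standard categorical cross-entropy term -sum(labels * log(softmax(logits))), the gradient with respect to the logits is exactly softmax(logits) - labels, which a finite-difference check confirms:

```python
import numpy as np

def softmax(z):
    e = np.exp(z - np.max(z, 1, keepdims=True))
    return e / np.sum(e, 1, keepdims=True)

def ce(z, labels):
    # categorical cross entropy, summed over the batch
    return np.sum(-labels * np.log(softmax(z)))

rng = np.random.RandomState(1)
z = rng.randn(2, 5)
labels = np.zeros((2, 5)); labels[0, 3] = 1; labels[1, 0] = 1  # one-hot rows

analytic = softmax(z) - labels   # the same quantity as self.diff
numeric = np.zeros_like(z)
eps = 1e-6
for i in np.ndindex(z.shape):
    z[i] += eps; up = ce(z, labels)
    z[i] -= 2 * eps; down = ce(z, labels)
    z[i] += eps
    numeric[i] = (up - down) / (2 * eps)

assert np.allclose(analytic, numeric, atol=1e-6)
print('diff = softmax(logits) - labels matches finite differences')
```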
1.10.2 The mean-squared-error loss class
1.10.2.1 Implementation 01
Exactly as in tensorflow, i.e. tf.reduce_mean(tf.squared(y_ - y)), with reduce_mean and squared defined as above.
1.10.2.2 Implementation 02

The detailed notation and derivations behind the figure above are given in "Fundamental deep learning models: algorithm principles and implementation - 03. Fully connected networks". The corresponding implementation:
class MSE(Node):
    def __init__(self, labels, logits, name=[]):
        """The mean-squared-error loss function."""
        Node.__init__(self, inbound_nodes=[labels, logits], name=name)

    def forward(self):
        labels = self.inbound_nodes[0].value
        logits = self.inbound_nodes[1].value
        self.m = self.inbound_nodes[0].value.shape[0]
        self.diff = labels - logits
        self.value = np.sum(self.diff ** 2, 1)

    def backward(self):
        self.gradients = {}
        # only the logits receive a gradient; the labels are constants
        self.gradients[self.inbound_nodes[1]] = -2. / self.m * self.diff
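The factor -2/m in backward is what falls out of differentiating the batch-averaged squared error L = (1/m) * sum((labels - logits)^2) with respect to the logits, as a quick finite-difference check shows:

```python
import numpy as np

rng = np.random.RandomState(2)
labels = rng.randn(4, 3)
logits = rng.randn(4, 3)
m = labels.shape[0]

def loss(logits):
    return np.sum((labels - logits) ** 2) / m

analytic = -2. / m * (labels - logits)   # the same formula as MSE.backward
numeric = np.zeros_like(logits)
eps = 1e-6
for i in np.ndindex(logits.shape):
    logits[i] += eps; up = loss(logits)
    logits[i] -= 2 * eps; down = loss(logits)
    logits[i] += eps
    numeric[i] = (up - down) / (2 * eps)

assert np.allclose(analytic, numeric, atol=1e-6)
print('MSE gradient -2/m * (labels - logits) verified')
```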
1.11 Implementation and experiments
1.11.1 Verification example
To compare the results obtained from the tensorflow and ANNbox libraries, the same program is run against each of them. The detailed code files are in: ; a beyond compare diff of the two program versions is shown below:

01. The first difference is the library import:
import tensorflow as tf
import ANNbox as tf
02. The second difference: tensorflow requires creating a session with sess = tf.InteractiveSession(), while ANNbox does not [frankly, I have not yet figured out what the session is for, so ANNbox does not implement one, but this does not affect the final results].
03. The third difference is global variable initialization, which ANNbox performs with tf.global_variables_initializer().run().
04. The fourth difference is the value-evaluation function: tensorflow uses eval, while ANNbox uses my_eval.
The final run results are as follows:

Comparing the test-set accuracies recorded by the two programs during training:
ANNbox, acc_test_list: [0.1993, 0.3639, 0.5266, 0.6176, 0.6968, 0.7338, 0.7534, 0.7698, 0.7836, 0.8034, ...]
tensorflow, acc_test_list: [0.1993, 0.3639, 0.5266, 0.6176, 0.6968, 0.7338, 0.7534, 0.7698, 0.7836, 0.8034, ...]
The two sequences are identical, which validates the ANNbox implementation.
1.11.2 Example 1
The program above has a single hidden layer; here we use two layers: the first (hidden) layer uses the relu activation, the second is the identity mapping, and the loss is softmax_cross_entropy_with_logits. The accuracy on the test data set reaches 95%. The two program versions differ only in the four points listed for the previous example.
1.11.2.1 The ANNbox version
from tensorflow.examples.tutorials.mnist import input_data
import ANNbox as tf
import matplotlib.pyplot as plt
import numpy as np
import sys
import time
#loaddata
mnist = input_data.read_data_sets("./MNISTDat", one_hot=True)
#sess = tf.InteractiveSession()
# Create the model
in_units = 784
h1_units = 300
o_units = 10
W1=tf.Variable(tf.truncated_normal([in_units, h1_units], stddev=0.1))
b1=tf.Variable(tf.zeros([h1_units]))
x=tf.placeholder(tf.float32,[None,in_units])
hidden1=tf.nn.relu(tf.matmul(x,W1)+b1)
W2=tf.Variable(tf.zeros([h1_units, o_units]))
b2=tf.Variable(tf.zeros([o_units]))
y=tf.matmul(hidden1,W2)+b2
y_ = tf.placeholder(tf.float32, [None, o_units])
#cross_entropy = tf.reduce_sum(tf.nn.softmax_cross_entropy_with_logits(labels=y_, logits=y))
cross_entropy = tf.nn.softmax_cross_entropy_with_logits(labels=y_, logits=y, name='cost') #loss = tf.reduce_mean(cost)
#cross_entropy = tf.nn.cross_entropy_with_logits(labels=y_, logits=tf.nn.softmax(y), name='cost')
epochs = 2
m = 50000
batch_size = 64*2*2
learning_rate=2e-4
Momentum_rate=0.9
steps_per_epoch = m // batch_size
#train_step = tf.train.GradientDescentOptimizer(learning_rate).minimize(cross_entropy)
train_step = tf.train.MomentumOptimizer(learning_rate,Momentum_rate).minimize(cross_entropy)
#train_step = tf.train.AdagradOptimizer(0.3).minimize(cross_entropy)
start_time=time.time()
#sess=tf.Session()
tf.global_variables_initializer().run()
#sess.run(tf.global_variables_initializer())
loss_list,acc_train_list,acc_test_list=[],[],[]
for _ in range(epochs):
    for i in range(steps_per_epoch):
        # batch_xs = mnist.train.images[i*batch_size:(i+1)*batch_size]
        # batch_ys = mnist.train.labels[i*batch_size:(i+1)*batch_size]
        batch_xs, batch_ys = mnist.train.next_batch(batch_size)
        '''train'''
        train_step.run({x: batch_xs, y_: batch_ys})
        loss_list.append(np.mean(cross_entropy.my_eval({x: batch_xs, y_: batch_ys})))
        acc_train_list.append(np.mean((np.argmax(y.my_eval({x: batch_xs, y_: batch_ys}), 1) == np.argmax(batch_ys, 1)).astype(int)))
        acc_test_list.append(np.mean((np.argmax(y.my_eval({x: mnist.test.images, y_: mnist.test.labels}), 1) == np.argmax(mnist.test.labels, 1)).astype(int)))
        sys.stdout.write("\rprocess: {}/{}, loss:{:.5f}, acc_train:{:.2f}, acc_test:{:.2f}".format(i, steps_per_epoch, loss_list[-1], acc_train_list[-1], acc_test_list[-1]))
plt.figure()
# plt.subplot(211)
plt.plot(range(len(loss_list)),loss_list,label=u'loss')
# plt.subplot(212)
plt.plot(range(len(loss_list)),acc_train_list,label=u'acc_train')
plt.plot(range(len(loss_list)),acc_test_list,label=u'acc_test')
plt.ylim([0,1])
plt.title('ANNbox')
plt.legend()
plt.show()
end_time = time.time()
print('total_time:',end_time-start_time)
1.11.2.2 The Tensorflow version
from tensorflow.examples.tutorials.mnist import input_data
import tensorflow as tf
import matplotlib.pyplot as plt
import numpy as np
import sys
import time
#loaddata
mnist = input_data.read_data_sets("./MNISTDat", one_hot=True)
sess = tf.InteractiveSession()
# Create the model
in_units = 784
h1_units = 300
o_units = 10
W1 = tf.Variable(tf.truncated_normal([in_units, h1_units], stddev=0.1))
b1 = tf.Variable(tf.zeros([h1_units]))
x = tf.placeholder(tf.float32, [None, in_units])
hidden1=tf.nn.relu(tf.matmul(x,W1)+b1)
W2=tf.Variable(tf.zeros([h1_units, o_units]))
b2=tf.Variable(tf.zeros([o_units]))
y=tf.matmul(hidden1,W2)+b2
y_ = tf.placeholder(tf.float32, [None, o_units])
#cross_entropy = tf.reduce_sum(tf.nn.softmax_cross_entropy_with_logits(labels=y_, logits=y))
cross_entropy=tf.nn.softmax_cross_entropy_with_logits(labels=y_, logits=y)
epochs = 2
m = 50000
batch_size = 64*2*2
learning_rate=2e-4
Momentum_rate=0.9
steps_per_epoch = m // batch_size
#train_step = tf.train.GradientDescentOptimizer(learning_rate).minimize(cross_entropy)
train_step = tf.train.MomentumOptimizer(learning_rate,Momentum_rate).minimize(cross_entropy)
#train_step = tf.train.AdagradOptimizer(0.3).minimize(cross_entropy)
start_time=time.time()
tf.global_variables_initializer().run()
loss_list,acc_train_list,acc_test_list=[],[],[]
for _ in range(epochs):
    for i in range(steps_per_epoch):
        # batch_xs = mnist.train.images[i*batch_size:(i+1)*batch_size]
        # batch_ys = mnist.train.labels[i*batch_size:(i+1)*batch_size]
        batch_xs, batch_ys = mnist.train.next_batch(batch_size)
        '''train'''
        train_step.run({x: batch_xs, y_: batch_ys})
        loss_list.append(np.mean(cross_entropy.eval({x: batch_xs, y_: batch_ys})))
        acc_train_list.append(np.mean((np.argmax(y.eval({x: batch_xs, y_: batch_ys}), 1) == np.argmax(batch_ys, 1)).astype(int)))
        acc_test_list.append(np.mean((np.argmax(y.eval({x: mnist.test.images, y_: mnist.test.labels}), 1) == np.argmax(mnist.test.labels, 1)).astype(int)))
        sys.stdout.write("\rprocess: {}/{}, loss:{:.5f}, acc_train:{:.2f}, acc_test:{:.2f}".format(i, steps_per_epoch, loss_list[-1], acc_train_list[-1], acc_test_list[-1]))
plt.figure()
# plt.subplot(211)
plt.plot(range(len(loss_list)),loss_list,label=u'loss')
# plt.subplot(212)
plt.plot(range(len(loss_list)),acc_train_list,label=u'acc_train')
plt.plot(range(len(loss_list)),acc_test_list,label=u'acc_test')
plt.ylim([0,1])
plt.title('tensorflow')
plt.legend()
plt.show()
end_time = time.time()
print('total_time:',end_time-start_time)