In the previous lesson we learned how to add a new layer in TensorFlow; the code is as follows:
import tensorflow as tf
import numpy as np
def add_layer(inputs, input_size, out_size, activation_function=None):
    # weight matrix of shape [input_size, out_size], initialized from a normal distribution
    Weights = tf.Variable(tf.random_normal([input_size, out_size]))
    # bias row vector, broadcast across the batch
    biases = tf.Variable(tf.zeros([1, out_size]))
    Wx_plus_b = tf.matmul(inputs, Weights) + biases
    if activation_function is None:
        outputs = Wx_plus_b
    else:
        outputs = activation_function(Wx_plus_b)
    return outputs
The code above defines the method for adding a layer. Next, we generate some data to simulate the fitting process.
x_data = np.linspace(-1, 1, 300)[:, np.newaxis]
noise = np.random.normal(0, 0.005, x_data.shape)
y_data = np.square(x_data) - 0.5 + noise
xs = tf.placeholder(tf.float32, [None, 1])
ys = tf.placeholder(tf.float32, [None, 1])
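As a quick sanity check of the generated data (a noisy quadratic), an optional plotting sketch, assuming matplotlib is installed:

import matplotlib.pyplot as plt

# scatter plot of the simulated training data y = x^2 - 0.5 + noise
plt.scatter(x_data, y_data, s=5)
plt.show()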
Now build a two-layer neural network.
layer1 = add_layer(xs, 1, 10, activation_function=tf.nn.relu)
prediction = add_layer(layer1, 10, 1, activation_function=None)
loss = tf.reduce_mean(tf.reduce_sum(tf.square(ys - prediction), reduction_indices=[1]))
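To spell out what this loss computes: tf.square(ys - prediction) is the per-sample squared error, reduce_sum with reduction_indices=[1] sums over the (here one-dimensional) output axis, and reduce_mean averages over the batch. For N samples this is simply

    loss = (1/N) * sum_{i=1..N} (y_i - prediction_i)^2

i.e. the ordinary mean squared error.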
Next comes training:
optimizer = tf.train.GradientDescentOptimizer(0.1)
train_step = optimizer.minimize(loss)
init = tf.initialize_all_variables()
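A small note: tf.initialize_all_variables() was deprecated in later TensorFlow 1.x releases (it may still run with a deprecation warning depending on your version); on a newer 1.x installation the equivalent call is:

init = tf.global_variables_initializer()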
The next step is to launch the graph in a session:
with tf.Session() as sess:
    sess.run(init)
    for i in range(1000):
        sess.run(train_step, feed_dict={xs: x_data, ys: y_data})
        if i % 50 == 0:
            # print the step index and the current loss every 50 steps
            print(i, sess.run(loss, feed_dict={xs: x_data, ys: y_data}))
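If you also want to see how well the network fits the data after training, a minimal sketch (reusing the matplotlib import from the earlier plotting snippet, and placed inside the same with tf.Session() block after the training loop) looks like this:

    # evaluate the trained network on the training inputs
    prediction_value = sess.run(prediction, feed_dict={xs: x_data})
    plt.scatter(x_data, y_data, s=5)       # dots: training data
    plt.plot(x_data, prediction_value, 'r-', lw=2)  # red line: fitted curve
    plt.show()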
And that is the complete process.
This article walked through building a two-layer neural network with TensorFlow: adding a new layer, generating simulated data, defining the loss function, and running the training loop, demonstrating end to end how to train the network.
