TensorFlow separates the definition of computations from their execution.
Phase 1: assemble a graph.
- Step 1: Read in data
# data input script
- Step 2: Create placeholders for inputs and labels
tf.placeholder(dtype, shape=None, name=None)
- Step 3: Create weights and biases
tf.Variable(initial_value=None, trainable=True, collections=None, name=None, dtype=None,...)
- Step 4: Build model to predict Y
# tensorflow operation
- Step 5: Specify the loss function
tf.nn.softmax_cross_entropy_with_logits()
- Step 6: Create optimizer
tf.train.GradientDescentOptimizer(learning_rate=0.001).minimize(loss)
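Putting these six steps together, a minimal Phase-1 sketch in TensorFlow 1.x style (tf.compat.v1 in TF 2.x) could look like the following; the 784-feature / 10-class shapes and the variable names are illustrative assumptions, not part of the original notes:

import tensorflow as tf

# Step 2: placeholders for a batch of inputs and one-hot labels (MNIST-like shapes assumed)
X = tf.placeholder(tf.float32, shape=[None, 784], name='X')
Y = tf.placeholder(tf.float32, shape=[None, 10], name='Y')

# Step 3: weights and biases as trainable variables
w = tf.Variable(tf.random_normal([784, 10], stddev=0.01), name='weights')
b = tf.Variable(tf.zeros([10]), name='bias')

# Step 4: model that produces logits to predict Y
logits = tf.matmul(X, w) + b

# Step 5: softmax cross-entropy loss, averaged over the batch
entropy = tf.nn.softmax_cross_entropy_with_logits(labels=Y, logits=logits)
loss = tf.reduce_mean(entropy)

# Step 6: gradient-descent optimizer op that minimizes the loss
optimizer = tf.train.GradientDescentOptimizer(learning_rate=0.001).minimize(loss)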
Phase 2: use a session to execute operations in the graph.
- Step 1: Initialize variables
sess.run(tf.global_variables_initializer())
writer = tf.summary.FileWriter('./graphs', sess.graph)  # optional: write the graph for TensorBoard
- Step 2: Run the optimizer op (with data fed into the placeholders for inputs and labels)
sess.run(op, feed_dict={X:X_batch, Y:Y_batch})
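A minimal end-to-end Phase-2 sketch is shown below; the one-feature linear model, the synthetic numpy batch, and the epoch count are assumptions made only so the example is self-contained and runnable:

import numpy as np
import tensorflow as tf

# Tiny stand-in for the Phase-1 graph (a one-feature linear model)
X = tf.placeholder(tf.float32, shape=[None, 1], name='X')
Y = tf.placeholder(tf.float32, shape=[None, 1], name='Y')
w = tf.Variable(0.0, name='weight')
b = tf.Variable(0.0, name='bias')
loss = tf.reduce_mean(tf.square(Y - (X * w + b)))
optimizer = tf.train.GradientDescentOptimizer(learning_rate=0.001).minimize(loss)

with tf.Session() as sess:
    # Step 1: initialize variables and (optionally) write the graph for TensorBoard
    sess.run(tf.global_variables_initializer())
    writer = tf.summary.FileWriter('./graphs', sess.graph)

    # Step 2: run the optimizer op, feeding a batch into the placeholders
    X_batch = np.random.rand(32, 1).astype(np.float32)
    Y_batch = 3.0 * X_batch + 1.0
    for epoch in range(100):  # epoch count chosen arbitrarily for the example
        _, loss_val = sess.run([optimizer, loss], feed_dict={X: X_batch, Y: Y_batch})

    writer.close()
    print('final loss:', loss_val)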
This post walks through building a neural network model with TensorFlow, covering the whole process from reading data to training the model: first assemble the computation graph (creating placeholders, variables, and so on), then use a session to execute the operations in the graph and train the model.