TensorFlow Learning Notes: Running the Code of the Classic MNIST Example

This post looks at the "Variable does not exist" error encountered when using TensorFlow and analyzes why it happens. By restructuring the code to introduce an explicit Graph, the problem that appears on repeated runs is avoided. Intended as a reference for beginners.


My environment is Anaconda3, with a virtual environment named tensorflow, running Python 3.6.2.

The following code can be found in many places online (it comes from the internet), but running it kept raising this error:

ValueError: Variable layer1/weights does not exist, or was not created with tf.get_variable(). Did you mean to set reuse=None in VarScope?

The cause is explained in the following link:

https://stackoverflow.com/questions/45592118/error-while-running-tensorflow-a-second-time

This is a matter of how TF works. One needs to understand that TF has a "hidden" state - a graph being built. 
Most of the tf functions create ops in this graph (like every tf.Variable call, every arithmetic operation and so on). 
On the other hand actual "execution" happens in the tf.Session(). Consequently your code will usually look like this:

build_graph()

with tf.Session() as sess:
  process_something()

My understanding is that TensorFlow automatically keeps the graph state around, so running the code a second time in the same process complains that the variables already exist; the fix is to define a graph explicitly yourself. (Is this understanding correct? Corrections from more experienced readers are welcome.)
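
To convince myself, here is a minimal sketch of the situation (my own illustration, not from the original example). Because the default graph lives as long as the Python process, re-running the graph-building code in the same process, e.g. in Spyder or a Jupyter cell, trips over the variable-scope state left by the previous run. Wrapping everything in an explicit tf.Graph() gives each run a fresh graph; calling tf.reset_default_graph() before rebuilding would be another way out.

import tensorflow as tf
import mnist_inference

def build_and_run_once():
    # An explicit Graph isolates this run from any state left over from a
    # previous run in the same Python process.
    with tf.Graph().as_default():
        x = tf.placeholder(tf.float32, [None, mnist_inference.INPUT_NODE], name='x-input')
        y = mnist_inference.inference(x, None)   # regularizer=None, just for this sketch
        with tf.Session() as sess:
            sess.run(tf.global_variables_initializer())
            # ... build the loss, train, evaluate, etc. ...

build_and_run_once()
build_and_run_once()   # a second call no longer collides with the first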

mnist_inference.py

import tensorflow as tf
import sys
import importlib
importlib.reload(sys)   # leftover from Python 2 default-encoding workarounds; not actually needed on Python 3


INPUT_NODE = 784     # number of input-layer nodes, equal to the number of pixels per image
OUTPUT_NODE = 10     # number of output-layer nodes, i.e. the number of classes
LAYER1_NODE = 500    # number of nodes in the hidden layer

def get_weight_variable(shape, regularizer):
    # Create (or, under a reusing scope, fetch) the weight variable and, if a
    # regularizer is given, add its regularization loss to the 'losses' collection.
    weights = tf.get_variable("weights", shape, initializer=tf.truncated_normal_initializer(stddev=0.1))
    if regularizer is not None:
        tf.add_to_collection('losses', regularizer(weights))
    return weights


def inference(input_tensor, regularizer, reuse=False):
    # Given the network input and parameters, compute the forward-propagation result.
    with tf.variable_scope('layer1', reuse = reuse):
        weights = get_weight_variable([INPUT_NODE, LAYER1_NODE], regularizer)
        biases = tf.get_variable("biases", [LAYER1_NODE], initializer=tf.constant_initializer(0.0))
        layer1 = tf.nn.relu(tf.matmul(input_tensor, weights) + biases)

    with tf.variable_scope('layer2', reuse = reuse):
        weights = get_weight_variable([LAYER1_NODE, OUTPUT_NODE], regularizer)
        biases = tf.get_variable("biases", [OUTPUT_NODE], initializer=tf.constant_initializer(0.0))
        layer2 = tf.matmul(layer1, weights) + biases

    return layer2
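
A side note on the reuse parameter, again just my own sketch and not part of the original program: the first call to inference() creates layer1/weights and friends, and a second call with reuse=True looks those same variables up instead of creating new ones, which is how one would build, say, an evaluation branch that shares weights with the training branch inside the same graph.

import tensorflow as tf
import mnist_inference

with tf.Graph().as_default():
    x_train = tf.placeholder(tf.float32, [None, mnist_inference.INPUT_NODE], name='train-input')
    x_test = tf.placeholder(tf.float32, [None, mnist_inference.INPUT_NODE], name='test-input')

    y_train = mnist_inference.inference(x_train, None)            # creates the variables
    y_test = mnist_inference.inference(x_test, None, reuse=True)  # reuses the same variables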


So in mnist_train.py I added an explicit graph context:

with tf.Graph().as_default() as g:

mnist_train.py

import tensorflow as tf
from tensorflow.examples.tutorials.mnist import input_data
import mnist_inference
import os

BATCH_SIZE = 100                   # number of training examples in one batch
LEARNING_RATE_BASE = 0.8           # base learning rate
LEARNING_RATE_DECAY = 0.99         # decay rate of the learning rate
REGULARIZATION_RATE = 0.0001       # coefficient of the regularization (model-complexity) term in the loss
TRAINING_STEPS = 30000             # number of training steps
MOVING_AVERAGE_DECAY = 0.99        # decay rate for the moving averages
MODEL_SAVE_PATH = "MNIST_model/"
MODEL_NAME = "mnist_model"


def train(mnist):
    with tf.Graph().as_default() as g:
        x = tf.placeholder(tf.float32, [None, mnist_inference.INPUT_NODE], name='x-input')
        y_ = tf.placeholder(tf.float32, [None, mnist_inference.OUTPUT_NODE], name='y-input')
    
        regularizer = tf.contrib.layers.l2_regularizer(REGULARIZATION_RATE)
        y = mnist_inference.inference(x, regularizer)
        global_step = tf.Variable(0, trainable=False)
    
    
        variable_averages = tf.train.ExponentialMovingAverage(MOVING_AVERAGE_DECAY, global_step)
        variables_averages_op = variable_averages.apply(tf.trainable_variables())
        cross_entropy = tf.nn.sparse_softmax_cross_entropy_with_logits(logits=y, labels=tf.argmax(y_, 1))
        cross_entropy_mean = tf.reduce_mean(cross_entropy)
        loss = cross_entropy_mean + tf.add_n(tf.get_collection('losses'))
        learning_rate = tf.train.exponential_decay(
            LEARNING_RATE_BASE,
            global_step,
            mnist.train.num_examples / BATCH_SIZE, LEARNING_RATE_DECAY,
            staircase=True)
        train_step = tf.train.GradientDescentOptimizer(learning_rate).minimize(loss, global_step=global_step)
        with tf.control_dependencies([train_step, variables_averages_op]):
            train_op = tf.no_op(name='train')
    
    
        saver = tf.train.Saver()
        with tf.Session() as sess:
            tf.global_variables_initializer().run()
    
            for i in range(TRAINING_STEPS):
                xs, ys = mnist.train.next_batch(BATCH_SIZE)
                _, loss_value, step = sess.run([train_op, loss, global_step], feed_dict={x: xs, y_: ys})
                if i % 1000 == 0:
                    print("After %d training step(s), loss on training batch is %g." % (step, loss_value))
                    saver.save(sess, os.path.join(MODEL_SAVE_PATH, MODEL_NAME), global_step=global_step)


def main(argv=None):
    mnist = input_data.read_data_sets("../../../datasets/MNIST_data", one_hot=True)
    train(mnist)

if __name__ == '__main__':
    tf.app.run()
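
For completeness, here is a rough sketch of how the saved checkpoint might be loaded back later, say in a separate mnist_eval.py. This is my own guess at the evaluation side (it ignores the moving-average shadow variables for brevity) and again wraps everything in an explicit Graph:

import tensorflow as tf
import mnist_inference
import mnist_train

def evaluate(mnist):
    with tf.Graph().as_default():
        x = tf.placeholder(tf.float32, [None, mnist_inference.INPUT_NODE], name='x-input')
        y_ = tf.placeholder(tf.float32, [None, mnist_inference.OUTPUT_NODE], name='y-input')
        y = mnist_inference.inference(x, None)

        # Accuracy over one-hot labels.
        correct_prediction = tf.equal(tf.argmax(y, 1), tf.argmax(y_, 1))
        accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))

        saver = tf.train.Saver()
        with tf.Session() as sess:
            ckpt = tf.train.get_checkpoint_state(mnist_train.MODEL_SAVE_PATH)
            if ckpt and ckpt.model_checkpoint_path:
                saver.restore(sess, ckpt.model_checkpoint_path)
                acc = sess.run(accuracy, feed_dict={x: mnist.validation.images,
                                                    y_: mnist.validation.labels})
                print("validation accuracy = %g" % acc)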


These are just my learning notes. I'm a TensorFlow beginner, so if my understanding is wrong, please point it out.
