tf.Graph.name_scope

This article explains how to use the tf.Graph.name_scope method in TensorFlow to manage the naming hierarchy of operations, with concrete code examples showing how nested name scopes are created and how they affect operation names.


tf.Graph.name_scope(name)

Returns a context manager that creates hierarchical names for operations.

A graph maintains a stack of name scopes. Entering a with name_scope(...): block pushes a new name onto that stack for the lifetime of the context, and leaving the block pops it off again, as the short sketch below illustrates.
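
For instance, a name pushed by an inner context is popped off the stack as soon as that context exits (a minimal sketch using the same TF 1.x graph API as the examples below):

import tensorflow as tf

with tf.Graph().as_default() as g:
    with g.name_scope("outer"):      # pushes "outer" onto the scope stack
        with g.name_scope("inner"):  # pushes "inner" on top of it
            a = tf.constant(1.0, name="a")
            assert a.op.name == "outer/inner/a"
    # Both scopes have been popped here, so names are top-level again.
    b = tf.constant(2.0, name="b")
    assert b.op.name == "b"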

The name argument is handled as follows:

  • A string (not ending with '/') creates a new name scope, and the names of all operations created in the context are prefixed with that string. If the name has already been used, self.unique_name(name) is called to produce a unique name.
  • A scope previously captured from a with g.name_scope(...) as scope: statement is treated as an "absolute" name scope, which allows re-entering an existing scope.
  • None or the empty string resets the current name scope to the top-level (empty) name scope.

For example:
#!/usr/bin/env python
# coding=utf-8

import tensorflow as tf

with tf.Graph().as_default() as g:
    c = tf.constant(5.0, name="c")
    assert c.op.name == "c"
    c_1 = tf.constant(6.0, name="c")
    assert c_1.op.name == "c_1"

    # Creates a scope called "nested".
    with g.name_scope("nested") as scope:
        nested_c = tf.constant(10.0, name="c")
        assert nested_c.op.name == "nested/c"

        # Creates a nested scope called "inner".
        with g.name_scope("inner"):
            nested_inner_c = tf.constant(20.0, name="c")
            assert nested_inner_c.op.name == "nested/inner/c"

        # Creates a nested scope called "inner_1" ("inner" is already taken).
        with g.name_scope("inner"):
            nested_inner_1_c = tf.constant(30.0, name="c")
            assert nested_inner_1_c.op.name == "nested/inner_1/c"

            # Treats 'scope' as an absolute name scope and
            # switches back to the "nested/" scope.
            with g.name_scope(scope):
                nested_d = tf.constant(40.0, name="d")
                assert nested_d.op.name == "nested/d"

                # Resets to the top-level (empty) name scope.
                with g.name_scope(""):
                    e = tf.constant(50.0, name="e")
                    assert e.op.name == "e"
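
Note that the string captured by as scope ends with a trailing '/' (here it is "nested/"); that trailing slash is what marks the name as absolute when it is passed back to g.name_scope(...). A quick check:

with tf.Graph().as_default() as g:
    with g.name_scope("nested") as scope:
        assert scope == "nested/"  # captured scope names end with '/'
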
The scope name itself can be captured from a with g.name_scope(...) as scope: statement, which stores the scope name in the variable scope. This value can then be used to name the operation that produces the final result of a block, for example:
inputs = tf.constant(...)
with g.name_scope('my_layer') as scope:
    weights = tf.Variable(..., name="weights")
    biases = tf.Variable(..., name="biases")
    affine = tf.matmul(inputs, weights) + biases
    output = tf.nn.relu(affine, name=scope)
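
Because scope ends with '/', passing it as the name of tf.nn.relu gives the output op the exact name "my_layer" rather than "my_layer/my_layer". Below is a runnable version of the sketch above (TF 1.x API, matching the article); the shapes and initial values are illustrative assumptions, not part of the original example:

import tensorflow as tf

with tf.Graph().as_default() as g:
    # Assumed shapes: a batch of 8 inputs with 4 features, a layer width of 3.
    inputs = tf.constant(1.0, shape=[8, 4], name="inputs")
    with g.name_scope('my_layer') as scope:
        weights = tf.Variable(tf.random_normal([4, 3]), name="weights")
        biases = tf.Variable(tf.zeros([3]), name="biases")
        affine = tf.matmul(inputs, weights) + biases
        # Naming the result with the captured scope ("my_layer/") gives
        # the op the exact name "my_layer".
        output = tf.nn.relu(affine, name=scope)
    assert output.op.name == "my_layer"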