tf.reshape and tf.Tensor.get_shape()

This article explains how to use TensorFlow's tf.reshape function to change the dimensions of a tensor. Concrete examples show how a tensor changes under different shape arguments, and the meaning of -1 in shape as an automatically computed dimension.


tf.reshape(tensor, shape, name=None)
  • tensor: the tensor whose dimensions are to be changed.
  • shape: the target shape.
  • Returns a new tensor with shape shape.

Note: at most one dimension in shape may be -1, which tells TensorFlow to compute that dimension automatically.
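A minimal runnable sketch of the -1 rule, using a 12-element tensor (names here are illustrative, not from the article):

```python
import tensorflow as tf

# A tensor with 12 elements.
t = tf.range(12)  # shape [12]

# -1 lets TensorFlow infer that dimension: 12 / 4 = 3 rows.
a = tf.reshape(t, [-1, 4])
print(a.shape)  # (3, 4)

# -1 may appear in any single position; two -1s would be ambiguous
# and raise an error.
b = tf.reshape(t, [2, -1])
print(b.shape)  # (2, 6)
```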

Reshapes a tensor.
Given tensor, this operation returns a tensor that has the same values as tensor with shape shape.
If shape is the special value [-1], then tensor is flattened and the operation outputs a 1-D tensor with all elements of tensor.
If shape is 1-D or higher, then the operation returns a tensor with shape shape filled with the values of tensor. In this case, the number of elements implied by shape must be the same as the number of elements in tensor.

# tensor 't' is [1, 2, 3, 4, 5, 6, 7, 8, 9]
# tensor 't' has shape [9]
reshape(t, [3, 3]) ==> [[1, 2, 3],
                        [4, 5, 6],
                        [7, 8, 9]]

# tensor 't' is [[[1, 1], [2, 2]],
#                [[3, 3], [4, 4]]]
# tensor 't' has shape [2, 2, 2]
reshape(t, [2, 4]) ==> [[1, 1, 2, 2],
                        [3, 3, 4, 4]]

# tensor 't' is [[[1, 1, 1],
#                 [2, 2, 2]],
#                [[3, 3, 3],
#                 [4, 4, 4]],
#                [[5, 5, 5],
#                 [6, 6, 6]]]
# tensor 't' has shape [3, 2, 3]
# pass '[-1]' to flatten 't'
reshape(t, [-1]) ==> [1, 1, 1, 2, 2, 2, 3, 3, 3, 4, 4, 4, 5, 5, 5, 6, 6, 6]
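The pseudocode above runs directly in TensorFlow 2 eager mode; a sketch of the first two cases:

```python
import tensorflow as tf

# 1-D tensor of 9 elements reshaped into a 3x3 matrix.
t = tf.constant([1, 2, 3, 4, 5, 6, 7, 8, 9])  # shape [9]
m = tf.reshape(t, [3, 3])
print(m.numpy())
# [[1 2 3]
#  [4 5 6]
#  [7 8 9]]

# [2, 2, 2] tensor reshaped into [2, 4]; element count (8) is preserved.
t2 = tf.constant([[[1, 1], [2, 2]], [[3, 3], [4, 4]]])
print(tf.reshape(t2, [2, 4]).numpy())
# [[1 1 2 2]
#  [3 3 4 4]]

# [-1] flattens to 1-D.
print(tf.reshape(t2, [-1]).numpy())
# [1 1 2 2 3 3 4 4]
```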

tf.Tensor.get_shape()
Returns the static (inferred) shape of the tensor as a TensorShape object, known at graph-construction time without evaluating the tensor.
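A short sketch contrasting the static shape from get_shape() with the dynamic shape returned by tf.shape():

```python
import tensorflow as tf

x = tf.zeros([2, 3, 4])

# get_shape() returns the static shape as a TensorShape object.
print(x.get_shape())            # (2, 3, 4)
print(x.get_shape().as_list())  # [2, 3, 4]

# The .shape attribute is equivalent.
print(x.shape)  # (2, 3, 4)

# tf.shape(x), by contrast, returns the shape as an int32 tensor,
# evaluated at run time (useful when dimensions are only known dynamically).
print(tf.shape(x).numpy())  # [2 3 4]
```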
