Functions that can change a tensor's dimensions:
- tf.reshape
- tf.reduce_mean
- tf.newaxis
- tf.expand_dims
import tensorflow as tf

a = tf.constant([[[1, 2, 3, 4, 5, 6],
                  [1, 2, 3, 4, 5, 6],
                  [1, 2, 3, 4, 5, 6],
                  [1, 2, 3, 4, 5, 6]],
                 [[1, 2, 3, 4, 5, 6],
                  [2, 3, 4, 5, 6, 6],
                  [6, 7, 6, 9, 1, 6],
                  [1, 2, 3, 4, 5, 6]],
                 [[1, 2, 3, 4, 5, 6],
                  [2, 3, 4, 5, 6, 6],
                  [6, 7, 6, 9, 1, 6],
                  [1, 2, 3, 4, 5, 6]]])
print(a.shape)
# If -1 appears in the shape passed to tf.reshape, TensorFlow infers that dimension automatically
x = tf.reshape(a, (a.shape[0], 4, 2, 3))
print(x)
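The comment above mentions -1, but the call spells out every dimension. As a small sketch of my own (reusing the tensor a defined above), the same reshape with the second dimension left as -1 lets TensorFlow infer it from the total element count:

# 3 * 4 * 6 = 72 elements in total, and 72 / (3 * 2 * 3) = 4, so -1 is inferred as 4
y = tf.reshape(a, (a.shape[0], -1, 2, 3))
print(y.shape)  # (3, 4, 2, 3), same as x above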
tf.reduce_mean(input_tensor, axis=None, keep_dims=False, name=None, reduction_indices=None)
Computes the mean of elements across dimensions of a tensor.
Reduces input_tensor along the dimensions given in axis. Unless keep_dims is true, the rank of the tensor is reduced by 1 for each entry in axis. If keep_dims is true, the reduced dimensions are retained with length 1. If axis has no entries, all dimensions are reduced, and a tensor with a single element is returned.
import numpy as np
import tensorflow as tf
x = np.array([[1.,2.,3.],[4.,5.,6.]])
sess = tf.Session()
mean_none = sess.run(tf.reduce_mean(x))     # mean over all elements
mean_0 = sess.run(tf.reduce_mean(x, 0))     # mean along axis 0 (per column)
mean_1 = sess.run(tf.reduce_mean(x, 1))     # mean along axis 1 (per row)
print (x)
print (mean_none)
print (mean_0)
print (mean_1)
sess.close()
x=
[[ 1. 2. 3.]
[ 4. 5. 6.]]
mean_none=3.5
mean_0=[ 2.5 3.5 4.5]
mean_1=[ 2. 5.]
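The run above uses the default keep_dims=False. As a minimal sketch of my own, built on the same x and the keep_dims parameter from the signature quoted above, setting it to True keeps the reduced axis with length 1:

import numpy as np
import tensorflow as tf

x = np.array([[1., 2., 3.], [4., 5., 6.]])
sess = tf.Session()
# keep_dims=True retains the reduced axis as a length-1 dimension,
# so the result still broadcasts cleanly against x
print(sess.run(tf.reduce_mean(x, 0, keep_dims=True)))  # [[2.5 3.5 4.5]], shape (1, 3)
print(sess.run(tf.reduce_mean(x, 1, keep_dims=True)))  # [[2.] [5.]], shape (2, 1)
sess.close()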
What's the difference between using tf.expand_dims and tf.newaxis in TensorFlow?
There is no real difference between the three (tf.expand_dims, indexing with tf.newaxis, and a tf.reshape that adds a length-1 axis), but sometimes one or the other may be more convenient:
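As a minimal sketch of my own (TF 1.x Session style, matching the example above), all three ways of inserting a new length-1 axis at position 1 give the same (2, 1, 3) result:

import numpy as np
import tensorflow as tf

x = np.array([[1., 2., 3.], [4., 5., 6.]])
t = tf.constant(x)
a = tf.expand_dims(t, axis=1)    # explicit function call
b = t[:, tf.newaxis, :]          # slicing with tf.newaxis (an alias for None)
c = tf.reshape(t, (2, 1, 3))     # reshape with an explicit length-1 dimension
sess = tf.Session()
print(sess.run(a).shape, sess.run(b).shape, sess.run(c).shape)  # (2, 1, 3) for each
sess.close()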