Reposted from the official TensorFlow documentation.
tf.dynamic_partition(
    data,
    partitions,
    num_partitions,
    name=None
)
Partitions data into num_partitions tensors using indices from partitions.
For each index tuple js of size partitions.ndim, the slice data[js, ...] becomes part of outputs[partitions[js]]. The slices with partitions[js] = i are placed in outputs[i] in lexicographic order of js, and the first dimension of outputs[i] is the number of entries in partitions equal to i. In detail,
outputs[i].shape = [sum(partitions == i)] + data.shape[partitions.ndim:]
outputs[i] = pack([data[js, ...] for js if partitions[js] == i])
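The two formulas above can be sketched in plain Python for the common case where partitions is a 1-D index vector (partitions.ndim == 1); this is an illustrative sketch, not TensorFlow's actual implementation:

```python
def dynamic_partition(data, partitions, num_partitions):
    """Sketch of tf.dynamic_partition for 1-D `partitions`."""
    outputs = [[] for _ in range(num_partitions)]
    for js, p in enumerate(partitions):
        # Entries with partitions[js] == p land in outputs[p],
        # preserving the (lexicographic) order of js.
        outputs[p].append(data[js])
    return outputs

print(dynamic_partition([10, 20, 30, 40, 50], [0, 0, 1, 1, 0], 2))
# [[10, 20, 50], [30, 40]]
```

Note that len(outputs[i]) equals sum(partitions == i), matching the shape formula above.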
For example:
# Scalar partitions.
partitions = 1
num_partitions = 2
data = [10, 20]
outputs[0] = [] # Empty with shape [0, 2]
outputs[1] = [[10, 20]]
# Vector partitions.
partitions = [0, 0, 1, 1, 0]
num_partitions = 2
data = [10, 20, 30, 40, 50]
outputs[0] = [10, 20, 50]
outputs[1] = [30, 40]
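The vector example can also be reproduced with NumPy boolean masks, which follow the same shape formula outputs[i].shape = [sum(partitions == i)] + data.shape[partitions.ndim:] (a sketch of the semantics, not the TensorFlow op itself):

```python
import numpy as np

partitions = np.array([0, 0, 1, 1, 0])
data = np.array([10, 20, 30, 40, 50])
num_partitions = 2

# For each partition index i, keep the entries where partitions == i,
# preserving their original order.
outputs = [data[partitions == i] for i in range(num_partitions)]
print(outputs[0])  # [10 20 50]
print(outputs[1])  # [30 40]
```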
This article walks through how TensorFlow's tf.dynamic_partition function works: it splits the input data into multiple output tensors according to the partition indices. The examples above show how to use the function to split data and how the shapes of the output tensors are computed.