[Repost] Using tf.py_func to Increase the Flexibility of TensorFlow Programs

This article takes a close look at the tf.py_func interface in TensorFlow, explaining how it works and how to use it, with examples showing how it makes programs more flexible and solves problems such as unknown tensor dimensions and branching on tensor values.

Reposted from: https://blog.youkuaiyun.com/jiongnima/article/details/80555387

Contents

The tf.py_func function interface

Using tf.py_func in Faster R-CNN

Using tf.py_func to obtain unknown tensor dimensions

Branching on tensor values inside tf.py_func


Before I knew it, I had been working with TensorFlow for a full year, and over that year my understanding of it has steadily deepened. Compared with Caffe, the first deep learning framework I used, I find TensorFlow better suited to research: it offers more freedom in building networks and configuring algorithms, and implementing your own algorithms goes faster.

That said, TensorFlow has its shortcomings. The first lies in its data mechanism: a tensor is only a placeholder and holds no actual value until it is filled via tf.Session().run. As a result, you cannot test a tensor's value while building the network, i.e. you cannot insert if...else... logic. Second, compared with numpy arrays, TensorFlow's tensor-manipulation interfaces are less flexible, which limits the flexibility of TensorFlow programs.

From the programming experience I have accumulated over a year of using TensorFlow, one important way to extend a TensorFlow program's flexibility is the tf.py_func interface. Let me first walk through this interface:

The tf.py_func function interface

py_func(
    func,
    inp,
    Tout,
    stateful=True,
    name=None
) 

        

From the signature above, we can see that the core of tf.py_func is func, a function defined by the user, which receives numpy arrays as input and returns numpy arrays as output. This is exactly why py_func is recommended: inside func, tensors that have been converted to numpy arrays can be manipulated with np. operations, which greatly extends the program's flexibility.

Next, let's look at the parameters tf.py_func accepts:

When using tf.py_func, the first three parameters are the key ones.

  • The first parameter, func, is the most important: a user-defined function whose inputs are numpy arrays and whose outputs are numpy arrays as well. Inside it, np. operations can be used freely.
  • The second parameter, inp, is the input that func receives, given as a list.
  • The third parameter, Tout, specifies the types of the tensors that func's returned numpy arrays are converted to: a list or tuple of dtypes when func returns multiple values, or a single dtype (optionally wrapped in a list) when it returns one. A minimal sketch follows this list.
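To make the three parameters concrete, here is a minimal sketch (the function _stats and the placeholder x are illustrative, not from the original post); note how Tout lists one dtype per returned value:

import tensorflow as tf
import numpy as np

def _stats(x):
    # func: receives a numpy array, returns two numpy arrays
    return np.mean(x, axis=0).astype(np.float32), np.max(x, axis=0).astype(np.float32)

x = tf.placeholder(tf.float32, shape=[None, 2])
# inp is a list of tensors; Tout lists one dtype per returned value
mean_x, max_x = tf.py_func(_stats, [x], [tf.float32, tf.float32])

with tf.Session() as sess:
    print(sess.run([mean_x, max_x], feed_dict={x: np.array([[1., 2.], [3., 4.]])}))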

 

Finally, let's look at tf.py_func's output:

The output is a list of tensors or a single tensor.

At this point, the principle behind tf.py_func becomes clear: it receives tensors, converts them to numpy arrays and feeds them into func, and finally converts the numpy arrays that func returns back into tensors.

There are two things to watch out for in practice. First, the types of func's return values must match the tensor types specified by Tout. Second, func runs outside the Graph: you cannot define trainable parameters inside it that take part in training (backpropagation).
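The second caveat can be seen directly: no gradient is registered for the py_func op, so gradients through it come back as None. A minimal sketch of my own, assuming TF 1.x:

import tensorflow as tf
import numpy as np

def _square(x):
    return np.square(x)

x = tf.placeholder(tf.float32, shape=[2])
y = tf.py_func(_square, [x], tf.float32)
# No gradient flows through py_func; this should print [None]
print(tf.gradients(y, x))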

That covers how tf.py_func is used and how it works. Below I give a few examples, both to show the flexibility tf.py_func brings, and to share from first-hand experience how tf.py_func can accomplish tasks that are otherwise hard in basic TensorFlow programming.

1) Using tf.py_func in Faster R-CNN

In the object detection algorithm Faster R-CNN, various kinds of ground truth must be computed, and the interfaces involved are fairly complex, so tf.py_func is a good fit here. For its usage, see how it is applied when computing the ground truth for the RPN and the ground truth for the proposals: in both cases, tensors are converted to numpy arrays, and np. operations carry out the complex computation.
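As a rough illustration of that pattern (this anchor_target_layer is a hypothetical stand-in for the numpy routine a Faster R-CNN implementation would provide, not the actual code):

import tensorflow as tf
import numpy as np

def anchor_target_layer(rpn_cls_score, gt_boxes, im_info):
    # Hypothetical numpy routine: match anchors against ground-truth
    # boxes and produce RPN labels, all in plain numpy.
    num_anchors = rpn_cls_score.shape[1] * rpn_cls_score.shape[2] * 9
    return -np.ones((1, num_anchors), dtype=np.float32)

rpn_cls_score = tf.placeholder(tf.float32, [1, None, None, 18])
gt_boxes = tf.placeholder(tf.float32, [None, 5])
im_info = tf.placeholder(tf.float32, [3])

# Wrap the numpy routine so it can sit inside the graph
rpn_labels = tf.py_func(anchor_target_layer,
                        [rpn_cls_score, gt_boxes, im_info],
                        tf.float32)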

Next, let me give two small examples of my own to show how powerful tf.py_func is.

2) Using tf.py_func to obtain unknown tensor dimensions

As you know, when declaring placeholders we may pass "None" for a dimension, meaning that its size is unknown and depends on the actual value in feed_dict. But what if a computation needs the size of that dimension? Consider the following example:

import tensorflow as tf
import numpy as np
 
def main():
    a = tf.placeholder(tf.float32, shape=[1, 2], name = "tensor_a")
    b = tf.placeholder(tf.float32, shape=[None, 2], name = "tensor_b")
    tile_a = tf.tile(a, [b.get_shape()[0], 1])  # b.get_shape()[0] is None here, which tf.tile cannot accept
    sess = tf.Session()
    array_a = np.array([[1., 2.]])
    array_b = np.array([[3., 4.],[5., 6.],[7., 8.]])
    feed_dict = {a: array_a, b: array_b}
    tile_a_value = sess.run(tile_a, feed_dict = feed_dict)
    print(tile_a_value)
 
if __name__ == '__main__':
    main()

As the code shows, the goal is very simple: expand tensor a so that its shape matches that of tensor b, whose first dimension is as yet unknown. Let's see what happens when we run this program:

Because the first dimension of tensor b is unknown, an error is raised when allocating storage for tile_a, complaining that None is not allowed.

How do we solve this? Rewrite the code slightly so that the tensor expansion happens inside tf.py_func:

import tensorflow as tf
import numpy as np
from py_func_1 import *
 
def main():
    a = tf.placeholder(tf.float32, shape=[1, 2], name = "tensor_a")
    b = tf.placeholder(tf.float32, shape=[None, 2], name = "tensor_b")
    tile_a = tile_tensor(a, b)
    sess = tf.Session()
    array_a = np.array([[1., 2.]])
    array_b = np.array([[3., 4.],[5., 6.],[7., 8.]])
    feed_dict = {a: array_a, b: array_b}
    tile_a_value = sess.run(tile_a, feed_dict = feed_dict)
    print(tile_a_value)
 
if __name__ == '__main__':
    main()

In the code above, the tensor expansion is carried out by the tile_tensor function, which is defined in the file py_func_1.py. Here is the code of py_func_1.py:

import tensorflow as tf
import numpy as np
 
def tile_tensor(tensor_a, tensor_b):
    # Wrap _tile_tensor so the expansion runs on numpy arrays inside the graph
    tile_tensor_a = tf.py_func(_tile_tensor, [tensor_a, tensor_b], tf.float32)
    return tile_tensor_a
 
def _tile_tensor(a, b):
    # a and b arrive here as numpy arrays, so b.shape[0] is a concrete number
    tile_a = np.tile(a, (b.shape[0], 1))
    return tile_a

As you can see, tf.py_func is used with _tile_tensor as its func parameter, and _tile_tensor performs the expansion of a. Running the modified main function produces the output:
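(The original post shows the result as a screenshot; the run should print:)

[[1. 2.]
 [1. 2.]
 [1. 2.]]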

As you can see, in tile_tensor, tensor a is expanded according to tensor b's actual shape ([3, 2]) even though that shape was unknown when the graph was built, and a tensor tile_a is returned.
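One practical caveat worth noting here (not in the original post): the tensor returned by tf.py_func carries no static shape information, so downstream ops that need a known shape may require an explicit set_shape inside tile_tensor:

tile_tensor_a = tf.py_func(_tile_tensor, [tensor_a, tensor_b], tf.float32)
tile_tensor_a.set_shape([None, 2])  # py_func output has unknown shape until declared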

3) Branching on tensor values inside tf.py_func

As I mentioned in an earlier post, a tensor's value cannot be tested before tf.Session().run. Suppose, for example, we want to expand tensor b depending on the value of tensor a:

import tensorflow as tf
import numpy as np
 
def main():
    a = tf.placeholder(tf.float32, shape=[1], name = "tensor_a")
    b = tf.placeholder(tf.float32, shape=[1, 2], name = "tensor_b")
    tile_b = b
    if a[0]==1.:  # evaluated while building the graph, when a holds no value yet
        tile_b = tf.tile(b, [4, 1])
    sess = tf.Session()
    array_a = np.array([1.])
    array_b = np.array([[2., 3.]])
    feed_dict = {a: array_a, b: array_b}
    tile_b_value = sess.run(tile_b, feed_dict = feed_dict)
    print(tile_b_value)
 
if __name__ == '__main__':
    main()

If the value of a[0] is 1.0, tensor b should be tiled four times. Let's run the code above and see:

As you can see, when the if statement executes, tensor a holds no value yet, so the body of the if is never entered. Even though a is later fed 1.0 through feed_dict, and the program raises no error, the intended result is not achieved.

How do we solve this? Rewrite the code slightly so that the value-dependent expansion happens inside tf.py_func:

import tensorflow as tf
import numpy as np
from py_func_2 import *
 
def main():
    a = tf.placeholder(tf.float32, shape=[1], name = "tensor_a")
    b = tf.placeholder(tf.float32, shape=[1, 2], name = "tensor_b")
    tile_tensor_b = tile_b(a, b)
    sess = tf.Session()
    array_a = np.array([1.])
    array_b = np.array([[2., 3.]])
    feed_dict = {a: array_a, b: array_b}
    tile_b_value = sess.run(tile_tensor_b, feed_dict = feed_dict)
    print(tile_b_value)
 
if __name__ == '__main__':
    main()

As you can see, the value-dependent expansion of tensor b is performed by the tile_b function in py_func_2.py, whose code is shown below:

import tensorflow as tf
import numpy as np
 
def tile_b(tensor_a, tensor_b):
    # Wrap _tile_b so the value test runs on numpy arrays
    tile_tensor_b = tf.py_func(_tile_b, [tensor_a, tensor_b], tf.float32)
    return tile_tensor_b
 
def _tile_b(a, b):
    # a is a numpy array here, so its value can be tested directly
    if a[0]==1.:
        tile_b = np.tile(b, (4, 1))
    else:
        tile_b = b
    return tile_b

As you can see, tile_b calls tf.py_func with _tile_b as the func parameter, and _tile_b expands b according to the value of a. Running the main function produces the output:
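(Again, the original shows a screenshot; the run should print:)

[[2. 3.]
 [2. 3.]
 [2. 3.]
 [2. 3.]]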

Tensor b has been expanded!

As you can see, once the tensors are passed into tf.py_func and converted to numpy arrays, value-based branching works.

The three examples above reveal what makes tf.py_func so useful: in practice, converting tensors to numpy arrays enables far more flexible operations and accomplishes more. In short, tf.py_func is a powerful interface, and I hope you can put it to flexible use in your own TensorFlow programs.

 

Addendum:

The drawbacks of calling tf.py_func are:

  • The body of the wrapped function is not serialized into the GraphDef, so tf.py_func cannot be used if you need to serialize and later restore the model.
  • The wrapped op must run on the same physical device as the Python program that calls it; with distributed TensorFlow, you must use tf.train.Server and with tf.device(): to ensure the two live on the same server.
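A final note on versions: in TensorFlow 2.x the tf.py_func name is gone; tf.numpy_function (or tf.compat.v1.py_func) plays the same role. A minimal sketch, assuming TF 2.x eager execution:

import tensorflow as tf
import numpy as np

def _tile_b(a, b):
    if a[0] == 1.:
        return np.tile(b, (4, 1))
    return b

a = tf.constant([1.])
b = tf.constant([[2., 3.]])
# tf.numpy_function is the TF 2.x counterpart of tf.py_func
tile_b = tf.numpy_function(_tile_b, [a, b], tf.float32)
print(tile_b)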