1. How to fetch the next batch when iterating over batches
Call sess.run(next_element), where next_element = iterator.get_next(); each run returns the next batch.
Make the batches with the Dataset API:
https://www.tensorflow.org/guide/datasets#dataset_structure
# Building and consuming batches
import tensorflow as tf

max_value = tf.placeholder(tf.int64, shape=[])
dataset = tf.data.Dataset.range(max_value)
data_batches = dataset.batch(2)                        # group elements into batches of 2
iterator = data_batches.make_initializable_iterator()
next_element = iterator.get_next()                     # running this tensor yields the next batch

sess = tf.Session()
# Initialize the iterator over a dataset with 10 elements (5 batches of 2).
sess.run(iterator.initializer, feed_dict={max_value: 10})
for i in range(5):
    value = sess.run(next_element)                     # fetch the next batch
    print(value)
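A common variant (a sketch reusing the iterator and next_element above, not part of the original notes) is to loop until the iterator is exhausted instead of hard-coding the number of batches:

sess.run(iterator.initializer, feed_dict={max_value: 10})
while True:
    try:
        print(sess.run(next_element))                  # each call returns the next batch
    except tf.errors.OutOfRangeError:
        break                                          # iterator exhausted; re-run iterator.initializer to restart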
2. Data reading hangs. Reference: https://blog.youkuaiyun.com/weixin_43413958/article/details/84886940
With a queue-based input pipeline, the reading ops block until the queue-runner threads are started, so add:
coord = tf.train.Coordinator()                                   # coordinates the lifetime of the reader threads
threads = tf.train.start_queue_runners(sess=sess, coord=coord)   # start the threads that fill the input queues
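Once reading or training is finished, the threads should be stopped and joined, otherwise the process may hang on exit. A minimal sketch, assuming the coord and threads created above and a queue-fed op named read_op (a hypothetical name):

try:
    while not coord.should_stop():
        sess.run(read_op)                   # read_op: your queue-fed read or training op (hypothetical)
except tf.errors.OutOfRangeError:
    pass                                    # the input queues ran out of data
finally:
    coord.request_stop()                    # signal the queue-runner threads to stop
    coord.join(threads)                     # wait for them to shut down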
3. tensorflow.python.framework.errors_impl.InternalError: Blas GEMM launch failed : a.shape=(5, 1280), b.shape=(1280, 320), m=5, n=320, k=1280
Reference: https://rylan.iteye.com/blog/2386155. This error usually means GPU memory is exhausted or already claimed by another process; cap how much memory this process may take:
gpu_options = tf.GPUOptions(per_process_gpu_memory_fraction=0.333)   # use at most ~1/3 of GPU memory
sess = tf.Session(config=tf.ConfigProto(gpu_options=gpu_options))
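An alternative that often resolves the same error (a sketch, not from the referenced post) is to let TensorFlow allocate GPU memory on demand instead of reserving a fixed fraction:

config = tf.ConfigProto()
config.gpu_options.allow_growth = True      # grow GPU memory usage on demand instead of grabbing it all at once
sess = tf.Session(config=config)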