
tf_new
luoganttcc
tf.broadcast_to
```python
import tensorflow as tf

x = tf.constant([1, 2, 3])
y = tf.broadcast_to(x, [5, 3])
print(y)
```

Output:

```
tf.Tensor(
[[1 2 3]
 [1 2 3]
 [1 2 3]
 [1 2 3]
 [1 2 3]], shape=(5, 3), dtype=int32)
```
tf.broadcast_dynamic_shape
Computes the broadcast shape for the given symbolic shapes.

The broadcasting rules:
1. Two arrays are broadcast-compatible if the axis lengths of their trailing dimensions (the dimensions counted from the end) match,
2. or if one of the two lengths is 1.

Broadcasting takes place along the missing and/or length-1 dimensions. (See also "The tensor broadcasting mechanism" below.)

```python
import tensorflow as tf

shape_x = (6, 3)
shape_y = (5, 1, 3)
c = tf.broadcast_dynamic_shape(shape_x, shape_y)
print(c)
```
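For these particular shapes the trailing axes pair up as (6, 3) against (1, 3): the last axis matches at 3, the next broadcasts 6 against 1, and the leading 5 carries over, so the call should print tf.Tensor([5 6 3], shape=(3,), dtype=int32).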
The tensor broadcasting mechanism
The literal translation of tensorflow is tensor (tensor) flow (flow). Why introduce a broadcasting mechanism for tensors? Because there is profit in it: it saves memory, which is to say it saves money!!! Everything has two sides, though; whatever saves money costs brainpower, or, put philosophically, it is 'abstraction'. Tensor addition and multiplication do not require the two tensors' dimensions to match exactly: for a tensor with many repeated elements, a low-dimensional tensor can stand in for the high-dimensional one. The broadcasting rule: two arrays are considered broadcast-compatible if the axis lengths of their trailing dimensions (the dimensions counted from the end) match, or if one of the two lengths is 1.
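A minimal sketch of the rule in action (the shapes here are my own illustration, not from the original post):

```python
import tensorflow as tf

a = tf.ones([6, 3])            # higher-dimensional tensor
b = tf.constant([1., 2., 3.])  # shape (3,): one row stands in for six repeated rows
print(a + b)                   # b is broadcast along the leading axis to shape (6, 3)
```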
tf.argmax (tf2 version)
tf.argmax is a little surprising at first: axis=0 computes the index of each column's maximum, while axis=1 computes the index of each row's maximum, the same as in numpy.

```python
import tensorflow as tf
import numpy as np

a = np.array([[2, 4, 5, 7],
              [9, 3, 6, 2]])
print('-' * 30 + 'separator' + '-' * 30)
print(a)
print('-' * 30 + 'separator' + '-' * 30)
a1 = tf.argmax(a, axis=0)
print('tf.argmax(a, axis=0)=\n', a1)
```
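For this a, the column maxima are 9, 4, 6 and 7, so the printed indices are [1 0 1 0]; with axis=1 the row maxima 7 and 9 would give [3 0].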
tf.reduce_max usage
For tf.reduce_max, axis=0 computes the maximum of each column of the matrix and axis=1 the maximum of each row, matching the behavior of numpy's np.max.

```python
import tensorflow as tf
import numpy as np

a = np.array([[2, 4, 5, 7],
              [9, 3, 6, 2]])
print('a=\n', a)
print('-' * 30 + 'separator' + '-' * 30)
a1 = tf.reduce_max(a, axis=0)
print('tf.reduce_max(a, axis=0)=\n', a1)
```
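Here the column maxima are [9 4 6 7]; tf.reduce_max(a, axis=1) would give the row maxima [7 9].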
tf.squeeze
Removes dimensions of size 1 from the shape of a tensor.

```python
import tensorflow as tf

a = tf.constant([[[1, 2, 3, 1]]])
print(a.shape)   # (1, 1, 4)
a1 = tf.squeeze(a)
print(a1.shape)  # (4,)
```
Matrix addition in tensorflow and numpy
Adding a tensor of shape $m \times n \times \dots \times k$ and one of shape $1 \times k$ amounts to adding along the last dimension.

```python
import numpy as np

a = np.array(range(3 * 4)).reshape([3, 4])
b = np.array([0.2] * 4)
print('a.shape=', a.shape)
print('b.shape=', b.shape)
print(a + b)
print('-' * 20 + 'separator' + '-' * 20)
```
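The excerpt breaks off after the separator, where the tensorflow half presumably followed. A sketch of the same broadcast on the tf side (my own reconstruction):

```python
import tensorflow as tf

ta = tf.reshape(tf.range(12, dtype=tf.float32), [3, 4])
tb = tf.constant([0.2] * 4)
print(ta + tb)  # tb's shape (4,) broadcasts across the rows, exactly as in numpy
```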
yolov3 data preprocessing
github

```python
import tensorflow as tf
from absl.flags import FLAGS

@tf.function
def transform_targets_for_output(y_true, grid_size, anchor_idxs):
    # Compares boxes against one group of anchors (there are three groups in
    # total, each matching boxes of a different scale; the box sizes handled
    # double from one group to the next).
    # y_true: (N, boxes, (x1, y1, x2, y2, class, best_anchor))
    # The signature comment is completed from the yolov3-tf2 repo this post
    # appears to excerpt; the rest of the function body is cut off.
    ...
```
tf.tensor_scatter_nd_update
Assigns values at the corresponding indices; here the indices are one-dimensional.

```python
import tensorflow as tf

tensor = [0, 0, 0, 0, 0, 0, 0, 0]   # tf.rank(tensor) == 1
indices = [[1], [3], [4], [7]]      # num_updates == 4, index_depth == 1
updates = [9, 10, 11, 12]           # num_updates == 4
print(tf.tensor_scatter_nd_update(tensor, indices, updates))
```
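With these inputs the call scatters 9, 10, 11 and 12 into positions 1, 3, 4 and 7, printing tf.Tensor([ 0  9  0 10 11  0  0 12], shape=(8,), dtype=int32).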
tensorflow dynamic array: TensorArray
A tensorflow dynamic array that can be read at any time.

```python
import tensorflow as tf

ta = tf.TensorArray(tf.float32, size=0, dynamic_size=True,
                    clear_after_read=False)
ta = ta.write(0, 10)
ta = ta.write(1, 20)
ta = ta.write(2, 30)
print(ta.read(0))
print(ta.read(1))
print(ta.read(2))
```
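The original's last line is cut off; my guess (an assumption) is that it stacked the array back into a single tensor, which works because clear_after_read=False keeps the entries around:

```python
print(ta.stack())  # tf.Tensor([10. 20. 30.], shape=(3,), dtype=float32)
```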
tf.minimum(A,B)
```python
import tensorflow as tf

tf.minimum([5, 2, 3], [2, 3, 4])
# <tf.Tensor: shape=(3,), dtype=int32, numpy=array([2, 2, 3], dtype=int32)>

tf.minimum([[3, 6, 7], [5, 2, 3]], [2, 3, 4])
# <tf.Tensor: shape=(2, 3), dtype=int32, numpy=
# array([[2, 3, 4],
#        [2, 2, 3]], dtype=int32)>
# The second operand's shape (3,) broadcasts across both rows.
```
tf.expand_dims
```python
import tensorflow as tf

arr = tf.constant([[1, 2, 3], [4, 5, 6]])
print(arr)
print('-' * 30)
for k in range(3):
    cc1 = tf.expand_dims(arr, axis=k)
    print(cc1)
    # print(cc1.shape.as_list())
    print('*' * 30)
```
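For the (2, 3) input, the loop inserts a new length-1 axis at positions 0, 1 and 2 in turn, printing tensors of shape (1, 2, 3), (2, 1, 3) and (2, 3, 1).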
tf.lookup.StaticHashTable usage
tf.lookup.StaticHashTable is essentially tensorflow's built-in dictionary; it is used repeatedly in the yolov3 tf code.

```python
import tensorflow as tf

def load_tfrecord_dataset(file_pattern, class_file, size=416):
    LINE_NUMBER = -1  # TODO: use tf.lookup.TextFileIndex.LINE_NUMBER
    # Maps each class name (one per line of class_file) to its line number.
    # The initializer call is cut off mid-line in the original post and is
    # completed here from the yolov3-tf2 repo this snippet excerpts.
    class_table = tf.lookup.StaticHashTable(
        tf.lookup.TextFileInitializer(class_file, tf.string, 0,
                                      tf.int64, LINE_NUMBER, delimiter='\n'),
        default_value=-1)
    ...
```
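A minimal self-contained sketch of the "built-in dictionary" idea (keys and values invented for illustration):

```python
import tensorflow as tf

keys = tf.constant(['person', 'bicycle', 'car'])
values = tf.constant([0, 1, 2], dtype=tf.int64)
table = tf.lookup.StaticHashTable(
    tf.lookup.KeyValueTensorInitializer(keys, values), default_value=-1)
# Unknown keys fall back to the default value.
print(table.lookup(tf.constant(['car', 'dog'])))  # tf.Tensor([ 2 -1], ...)
```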
tf.data.Dataset usage
The tf.data.Dataset API supports writing descriptive and efficient input pipelines. Dataset usage follows a common pattern: create a source dataset from your input data, apply dataset transformations to preprocess the data, then iterate over the dataset and process its elements. Iteration happens in a streaming fashion, so the full dataset never needs to fit into memory.

```python
import tensorflow as tf

dataset = tf.data.Dataset.from_tensor_slices([1, 2, 3])
for element in dataset:
    print(element)
```
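A sketch of the "transform, then iterate" steps of that pattern (the particular map and batch are my own illustration):

```python
dataset = tf.data.Dataset.from_tensor_slices(tf.range(10))
dataset = dataset.map(lambda x: x * 2).batch(4)  # preprocess, then group into batches
for batch in dataset:
    print(batch)  # shapes (4,), (4,), (2,)
```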
esmm
An ESMM implementation: https://github.com/wziji/deep_ctr/tree/master/ESMM
tf.scan
```python
import tensorflow as tf
import numpy as np

elems = np.array([1, 2, 3, 4, 5, 6])
running_sum = tf.scan(lambda a, x: a + x, elems)  # named to avoid shadowing the builtin `sum`
# running_sum == [1, 3, 6, 10, 15, 21]
```
tf.roll
```python
import tensorflow as tf

t = [0, 1, 2, 3, 4]
tf.roll(t, shift=2, axis=0)
# <tf.Tensor: shape=(5,), dtype=int32, numpy=array([3, 4, 0, 1, 2], dtype=int32)>
```

axis=[0] can look a little odd here: rolling along axis 0 shifts whole rows (the movement runs down the columns), while rolling along axis 1 shifts the elements within each row.

```python
t = [[0, 1, 2, 3, 4],
     [5, 6, 7, 8, 9]]
t1 = tf.roll(t, shift=[1], axis=[0])
print('t1=', t1)
# The original post is cut off at this line; axis=[1] is assumed from the
# row/column note above.
t2 = tf.roll(t, shift=[1], axis=[1])
print('t2=', t2)
```
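Assuming the t2 line indeed rolled along axis 1, the outputs are t1 = [[5 6 7 8 9], [0 1 2 3 4]] (the rows swap) and t2 = [[4 0 1 2 3], [9 5 6 7 8]] (each row rotates right by one).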
tf.reverse_sequence
tf.reverse_sequence only reverses the first n entries of each row: seq_lengths = [7, 2, 3, 5] means the first 7 entries of row one, the first 2 of row two, and so on.

```python
import tensorflow as tf

seq_lengths = [7, 2, 3, 5]
inputs = [[1, 2, 3, 4, 5, 0, 0, 0],
          [1, 2, 0, 0, 0, 0, 0, 0],
          [1, 2, 3, 4, 0, 0, 0, 0],
          [1, 2, 3, 4, 5, 6, 7, 8]]
# The call itself is cut off in the original; this is the standard usage:
print(tf.reverse_sequence(inputs, seq_lengths, seq_axis=1, batch_axis=0))
```
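This reverses only the marked prefixes, giving rows [0 0 5 4 3 2 1 0], [2 1 0 0 0 0 0 0], [3 2 1 4 0 0 0 0] and [5 4 3 2 1 6 7 8].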
tf.reverse
As I said in an earlier post: a one-dimensional matrix is a queue, a two-dimensional matrix is a square array, a three-dimensional matrix is a building, and a four-dimensional matrix is a housing estate. I'll explain what tf.reverse means using a three-dimensional matrix; you can picture it as a building.

```python
import tensorflow as tf

# The original post breaks off inside the second block; the values after
# [10, ...] and the tf.reverse call are a reconstruction for illustration.
t = tf.constant(
    [[[ 0,  1,  2,  3],
      [ 4,  5,  6,  7],
      [ 8,  9, 10, 11]],

     [[10, 11, 12, 13],
      [14, 15, 16, 17],
      [18, 19, 20, 21]]])

print(tf.reverse(t, axis=[0]))  # reverses along axis 0: the two "floors" swap
```
tf.reshape
```python
import tensorflow as tf

t = tf.constant([1, 2, 3, 4, 5, 6, 7, 8, 9])
tf.reshape(t, [3, 3])

t = tf.constant([[[1, 1], [2, 2]],
                 [[3, 3], [4, 4]]])
# tensor 't' has shape [2, 2, 2]
tf.reshape(t, [2, 4])
# <tf.Tensor: shape=(2, 4), dtype=int32, numpy=
# array([[1, 1, 2, 2],
#        [3, 3, 4, 4]], dtype=int32)>
```
tf.repeat
```python
import tensorflow as tf

tf.repeat(['a', 'b', 'c'], repeats=[3, 0, 2], axis=0)
# <tf.Tensor: shape=(5,), dtype=string,
#  numpy=array([b'a', b'a', b'a', b'c', b'c'], dtype=object)>

tf.repeat([[1, 2], [3, 4]], repeats=[2, 3], axis=0)
# <tf.Tensor: shape=(5, 2), dtype=int32, numpy=
# array([[1, 2],
#        [1, 2],
#        [3, 4],
#        [3, 4],
#        [3, 4]], dtype=int32)>
```
tf.rank()
```python
import tensorflow as tf

# shape of tensor 't' is [2, 2, 3]
t = tf.constant([[[1, 1, 1], [2, 2, 2]], [[3, 3, 3], [4, 4, 4]]])
tf.rank(t)  # 3
```

The rank of a tensor is not the same as the rank of a matrix: a tensor's rank is the number of indices required to uniquely select each of its elements, i.e. it can be understood as the number of dimensions. Rank is also known as "order", "degree" or "ndims".
Some uses of tf.ones
```python
import tensorflow as tf

tf.ones([3, 4], tf.int32)
# <tf.Tensor: shape=(3, 4), dtype=int32, numpy=
# array([[1, 1, 1, 1],
#        [1, 1, 1, 1],
#        [1, 1, 1, 1]], dtype=int32)>

# The original line is cut off; completed to match the identical helper in
# the tf.constant_initializer entry below.
def make_variables(k, initializer):
    return (tf.Variable(initializer(shape=[k], dtype=tf.float32)),
            tf.Variable(initializer(shape=[k, k], dtype=tf.float32)))
```
tf.Module
Inherit from tf.Module instead of object: any tf.Variable or tf.Module instances assigned to the object's attributes can then be collected through the variables, trainable_variables, or submodules properties.

```python
import tensorflow as tf

class Dense(tf.Module):
    # The body below the super() call is cut off in the original post and is
    # filled in following the Dense example in the tf.Module documentation.
    def __init__(self, in_features, out_features, name=None):
        super(Dense, self).__init__(name=name)
        self.w = tf.Variable(
            tf.random.normal([in_features, out_features]), name='w')
        self.b = tf.Variable(tf.zeros([out_features]), name='b')

    def __call__(self, x):
        y = tf.matmul(x, self.w) + self.b
        return tf.nn.relu(y)
```
tf.map_fn
```python
import numpy as np
import tensorflow as tf

elems = np.array([1, 2, 3, 4, 5, 6])
tf.map_fn(lambda x: x * x, elems)
# <tf.Tensor: shape=(6,), dtype=int64, numpy=array([ 1,  4,  9, 16, 25, 36])>
```
tf.foldl
```python
import tensorflow as tf

elems = tf.constant([1, 2, 3, 4, 5, 6])
sum1 = tf.foldl(lambda a, x: a * x, elems)
print(sum1.numpy())  # 720, the product of the elements
sum2 = tf.foldl(lambda a, x: a + x, elems)
print(sum2.numpy())  # 21, the sum of the elements
```
tf.fill
```python
import tensorflow as tf

tf.fill([2, 3], 9)
# <tf.Tensor: shape=(2, 3), dtype=int32, numpy=
# array([[9, 9, 9],
#        [9, 9, 9]], dtype=int32)>
```
tf.eye
```python
import tensorflow as tf

tf.eye(2)
# <tf.Tensor: shape=(2, 2), dtype=float32, numpy=
# array([[1., 0.],
#        [0., 1.]], dtype=float32)>

tf.eye(2, batch_shape=[3])
# <tf.Tensor: shape=(3, 2, 2), dtype=float32, numpy=
# array([[[1., 0.],
#         [0., 1.]],
#        [[1., 0.],
#         [0., 1.]],
#        [[1., 0.],
#         [0., 1.]]], dtype=float32)>
```
tf.expand_dims and tf.squeeze(cc)
```python
import tensorflow as tf

image = tf.zeros([10, 10, 3])
print(image.shape.as_list())                          # [10, 10, 3]
print(tf.expand_dims(image, axis=0).shape.as_list())  # [1, 10, 10, 3]
print(tf.expand_dims(image, axis=1).shape.as_list())  # [10, 1, 10, 3]
cc = tf.expand_dims(image, -1)
print(cc.shape.as_list())                             # [10, 10, 3, 1]
# The last print is cut off in the original; given the title, presumably:
print(tf.squeeze(cc).shape.as_list())                 # [10, 10, 3]
```
tf.executing_eagerly()
```python
import tensorflow as tf

tf.executing_eagerly()  # True: eager execution is on by default in TF2

x = [[2.]]
m = tf.matmul(x, x)
print("hello, {}".format(m))
# hello, [[4.]]
```
tf.ensure_shape (this one feels a bit superfluous)
```python
import tensorflow as tf

x = tf.constant([1, 2, 3])
print(x.shape)  # (3,)
x = tf.ensure_shape(x, [3])
```
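Where it does earn its keep is as a runtime assertion: if the actual shape is incompatible, the op raises. A minimal sketch (my own example, not from the post):

```python
y = tf.constant([1, 2, 3, 4])
try:
    tf.ensure_shape(y, [3])  # shape (4,) is incompatible with [3]
except tf.errors.InvalidArgumentError as e:
    print('shape check failed:', e)
```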
tf.edit_distance: Levenshtein distance
```python
import tensorflow as tf

# 'hypothesis' is a tensor of shape `[2, 1]` with variable-length values:
# (0,0) = ["a"]
# (1,0) = ["b"]
hypothesis = tf.sparse.SparseTensor(
    [[0, 0, 0], [1, 0, 0]],
    ["a", "b"],
    (2, 1, 1))
print(tf.sparse.to_dense(hypothesis))
```
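The excerpt stops before the distance is actually computed; a sketch of the missing half, with a truth tensor invented here for illustration:

```python
# 'truth' has shape [2, 1]; (0,0) = ["a", "b"], (1,0) = ["c"].
truth = tf.sparse.SparseTensor(
    [[0, 0, 0], [0, 0, 1], [1, 0, 0]],
    ["a", "b", "c"],
    (2, 1, 2))
# Levenshtein distance, normalized by the truth lengths: [[0.5], [1.0]].
print(tf.edit_distance(hypothesis, truth, normalize=True))
```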
tf.sparse.SparseTensor
The reason to use tf.sparse.SparseTensor is to save memory: a sparse matrix can be represented without a huge dense matrix.

tf.sparse.SparseTensor(indices, values, dense_shape)

indices: the indices of the non-zero elements
values: the non-zero values
dense_shape: the shape of the dense tensor

```python
import tensorflow as tf

# The literal is cut off in the original; the remaining arguments here are
# a reconstruction for illustration.
sp_input = tf.sparse.SparseTensor(indices=[[0, 0], [1, 2]],
                                  values=[1, 2],
                                  dense_shape=[3, 4])
print(tf.sparse.to_dense(sp_input))
```
tf.dynamic_stitch and tf.dynamic_partition
```python
import tensorflow as tf

x = tf.constant([0.1, -1., 5.2, 4.3, -1., 7.4])
# Check which elements of x are not -1.
condition_mask = tf.not_equal(x, tf.constant(-1.))
# [True, False, True, True, False, True]
# Split the tensor in two according to the positions in condition_mask.
partitioned_data = tf.dynamic_partition(
    x, tf.cast(condition_mask, tf.int32), 2)
# partitioned_data[0] == [-1., -1.]           (mask False)
# partitioned_data[1] == [0.1, 5.2, 4.3, 7.4] (mask True)
```
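The title also names tf.dynamic_stitch, but the excerpt is cut off before it appears. A sketch of the reassembly step, following the pattern in the tf.dynamic_stitch docs:

```python
# Recover each element's original position, update one partition, and stitch
# the pieces back together in the original order.
condition_indices = tf.dynamic_partition(
    tf.range(tf.shape(x)[0]), tf.cast(condition_mask, tf.int32), 2)
partitioned_data[1] = partitioned_data[1] + 1.0
x2 = tf.dynamic_stitch(condition_indices, partitioned_data)
print(x2)  # [1.1, -1., 6.2, 5.3, -1., 8.4]: -1 entries untouched, rest + 1
```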
tf.convert_to_tensor
```python
import tensorflow as tf
import numpy as np

def my_func(arg):
    arg = tf.convert_to_tensor(arg, dtype=tf.float32)
    return arg

# The following calls are equivalent.
value_1 = my_func(tf.constant([[1.0, 2.0], [3.0, 4.0]]))
print(value_1)
value_2 = my_func([[1.0, 2.0], [3.0, 4.0]])
print(value_2)
value_3 = my_func(np.array([[1.0, 2.0], [3.0, 4.0]], dtype=np.float32))
print(value_3)
```
tf.constant_initializer
```python
import tensorflow as tf

def make_variables(k, initializer):
    return (tf.Variable(initializer(shape=[k], dtype=tf.float32)),
            tf.Variable(initializer(shape=[k, k], dtype=tf.float32)))

v1, v2 = make_variables(3, tf.constant_initializer(2.))
print(v1)
# <tf.Variable ... shape=(3,) dtype=float32, numpy=array([2., 2., 2.], dtype=float32)>
```
tf.cond
```python
import tensorflow as tf

x = tf.constant(12)
y = tf.constant(5)

def f1():
    return tf.multiply(x, 17)

def f2():
    return tf.add(y, 23)

r = tf.cond(tf.less(x, y), f1, f2)
print(r.numpy())
```

If x < y, function f1 is executed; otherwise f2 is. Here 12 < 5 is false, so f2 runs and the result is 28.
tf.concat
```python
import tensorflow as tf

t1 = [[1, 2, 3], [4, 5, 6]]
t2 = [[7, 8, 9], [10, 11, 12]]
tf.concat([t1, t2], 0)
# <tf.Tensor: shape=(4, 3), dtype=int32, numpy=
# array([[ 1,  2,  3],
#        [ 4,  5,  6],
#        [ 7,  8,  9],
#        [10, 11, 12]], dtype=int32)>
```
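For contrast (my own addition, not in the excerpt), concatenating along axis 1 joins the tensors side by side:

```python
tf.concat([t1, t2], 1)
# <tf.Tensor: shape=(2, 6), dtype=int32, numpy=
# array([[ 1,  2,  3,  7,  8,  9],
#        [ 4,  5,  6, 10, 11, 12]], dtype=int32)>
```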
tf.clip_by_value
```python
import tensorflow as tf

t = tf.constant([[-10., -1., 0.], [0., 2., 10.]])
t2 = tf.clip_by_value(t, clip_value_min=-1, clip_value_max=1)
t2.numpy()
# array([[-1., -1.,  0.],
#        [ 0.,  1.,  1.]], dtype=float32)
```
tf.clip_by_norm
Given $t = [x_1, x_2, \dots, x_n]$, its L2 norm is

$$\ell_2 = \sqrt{\sum_{i=1}^{n} x_i^2}$$

and with norm = 2.0 the clipped tensor is

$$\text{clip\_norm} = \frac{t \cdot \text{norm}}{\ell_2}$$

```python
import numpy as np

t = np.array([[1, 2, 3, 4, 5]])
l2norm4t = np.linalg.norm(t)
# The original is cut off here; the remaining lines apply the formula above.
norm = 2.0
clipped = t * norm / l2norm4t
print(clipped)
```
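The same result should come out of the actual API, tf.clip_by_norm (this check is my addition, not part of the original excerpt):

```python
import tensorflow as tf

# l2(t) = sqrt(55) ≈ 7.416 > 2.0, so the tensor is scaled down to norm 2.0.
print(tf.clip_by_norm(tf.constant(t, dtype=tf.float32), 2.0))
```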