Preface
Irregularly updated notes taken while learning TensorFlow.
tf.nn.embedding_lookup(params, ids)
In TensorFlow, a Variable cannot be indexed by subscript the way a Python list or NumPy array can. One workaround is a matrix multiplication: multiplying by a one-hot vector picks out a particular row, e.g. the fourth row of the table.
TensorFlow provides tf.nn.embedding_lookup(params, ids) for this: it selects specific vectors like a table lookup. The operation is very common in algorithms such as word2vec and TransE.
import numpy as np
import tensorflow as tf

# a 4x4 identity matrix serves as the "embedding table"
data = np.eye(4, 4)
data = tf.convert_to_tensor(data)
lk = [0, 1]  # ids of the rows to look up
lookup_data = tf.nn.embedding_lookup(data, lk)

with tf.Session() as sess:
    print(sess.run(lookup_data))
[[1. 0. 0. 0.]
 [0. 1. 0. 0.]]
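The matrix-multiplication view mentioned above can be sketched in plain NumPy (an illustration of the idea, not TensorFlow's implementation): a one-hot row vector times the table selects one row, and embedding_lookup amounts to doing that row selection in bulk.

```python
import numpy as np

# same 4x4 "embedding table" as in the example above
table = np.eye(4, 4)

# selecting row 3 via a one-hot vector times the table
one_hot = np.array([0.0, 0.0, 0.0, 1.0])
row = one_hot @ table
print(row)  # identical to table[3]

# the lookup is equivalent to plain fancy indexing with the ids
ids = [0, 1]
print(table[ids])
```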
tf.cond(pred, fn1, fn2, name=None)
Similar to the ternary operator ?: in C++, TensorFlow's conditional is tf.cond(), which controls which branch of the graph runs.
Note that fn1 and fn2 must be functions (callables), not tensors.
Example:
import tensorflow as tf

a = tf.constant(2)
b = tf.constant(3)
x = tf.constant(4)
y = tf.constant(5)
z = tf.multiply(a, b)
# x < y is true, so the true branch tf.add(x, z) = 4 + 6 = 10 is taken
result = tf.cond(x < y, lambda: tf.add(x, z), lambda: tf.square(y))
with tf.Session() as session:
    print(result.eval())
10
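To see why fn1 and fn2 must be callables, here is a minimal pure-Python sketch of the same calling convention (not TensorFlow's actual implementation): the branches are passed unevaluated, and only the selected one ever executes.

```python
def cond(pred, fn1, fn2):
    # minimal sketch of tf.cond's calling convention: branches are
    # callables, so only the chosen branch is evaluated
    return fn1() if pred else fn2()

x, y, z = 4, 5, 6  # z plays the role of tf.multiply(a, b) above
print(cond(x < y, lambda: x + z, lambda: y * y))  # prints 10
print(cond(x > y, lambda: x + z, lambda: y * y))  # prints 25
```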
tf.nn.l2_normalize(x, dim, epsilon=1e-12, name=None)
tf.nn.l2_normalize applies L2 normalization to a tensor along a given dimension. Its parameters:
x is the input tensor;
dim selects the dimension to normalize along: 0 normalizes each column, 1 each row;
epsilon is the lower bound on the norm, guarding against division by zero;
Example:
import tensorflow as tf

input_data = tf.constant([[1.0, 2, 3], [4.0, 5, 6], [7.0, 8, 9]])
output1 = tf.nn.l2_normalize(input_data, dim=0)  # normalize each column
output2 = tf.nn.l2_normalize(input_data, dim=1)  # normalize each row
with tf.Session() as sess:
    print(sess.run(output1))
    print(sess.run(output2))
dim = 0 normalizes each column:
norm(1) = \sqrt{1^2 + 4^2 + 7^2} = \sqrt{66}
norm(2) = \sqrt{2^2 + 5^2 + 8^2} = \sqrt{93}
norm(3) = \sqrt{3^2 + 6^2 + 9^2} = \sqrt{126}
[[1/norm(1), 2/norm(2), 3/norm(3)],
 [4/norm(1), 5/norm(2), 6/norm(3)],
 [7/norm(1), 8/norm(2), 9/norm(3)]] =
[[0.12309149 0.20739034 0.26726127]
 [0.49236596 0.51847583 0.53452253]
 [0.86164045 0.82956135 0.80178374]]
dim = 1 normalizes each row:
norm(1) = \sqrt{1^2 + 2^2 + 3^2} = \sqrt{14}
norm(2) = \sqrt{4^2 + 5^2 + 6^2} = \sqrt{77}
norm(3) = \sqrt{7^2 + 8^2 + 9^2} = \sqrt{194}
[[1/norm(1), 2/norm(1), 3/norm(1)],
 [4/norm(2), 5/norm(2), 6/norm(2)],
 [7/norm(3), 8/norm(3), 9/norm(3)]] =
[[0.12309149 0.20739034 0.26726127]
[0.49236596 0.51847583 0.53452253]
[0.86164045 0.82956135 0.80178374]]
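The arithmetic above can be checked with NumPy alone (a sketch of the math, not the TensorFlow kernel): dividing by the per-axis L2 norm leaves every column (dim = 0) or every row (dim = 1) with unit norm.

```python
import numpy as np

x = np.array([[1.0, 2, 3], [4.0, 5, 6], [7.0, 8, 9]])

# dim=0 counterpart: divide each column by its L2 norm
col_normed = x / np.linalg.norm(x, axis=0, keepdims=True)
# dim=1 counterpart: divide each row by its L2 norm
row_normed = x / np.linalg.norm(x, axis=1, keepdims=True)

print(np.linalg.norm(col_normed, axis=0))  # all ones
print(np.linalg.norm(row_normed, axis=1))  # all ones
```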
tf.reduce_sum(input_tensor,axis=None,keep_dims=False,name=None)
tf.reduce_sum sums the elements of a tensor. Its parameters:
axis = 0 sums over columns, axis = 1 sums over rows.
keep_dims = True keeps the original number of dimensions; otherwise the reduced dimension is dropped.
x = tf.constant([[1, 1, 1], [1, 1, 1]])
tf.reduce_sum(x) ==> 6  # sum over the whole tensor: six 1s
tf.reduce_sum(x, 0) ==> [2, 2, 2]  # same as axis=0: sum over columns
tf.reduce_sum(x, 1) ==> [3, 3]  # axis=1: sum over rows
tf.reduce_sum(x, 1, keep_dims=True) ==> [[3], [3]]  # row sums, original rank kept
tf.reduce_sum(x, [0, 1]) ==> 6  # sum over both rows and columns
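The same reductions can be reproduced with NumPy's np.sum, where keepdims plays the role of keep_dims; this is a handy way to sanity-check axis arguments:

```python
import numpy as np

x = np.array([[1, 1, 1], [1, 1, 1]])

print(np.sum(x))                         # 6
print(np.sum(x, axis=0))                 # [2 2 2], sum over columns
print(np.sum(x, axis=1))                 # [3 3], sum over rows
print(np.sum(x, axis=1, keepdims=True))  # [[3] [3]], rank preserved
print(np.sum(x, axis=(0, 1)))            # 6, both axes reduced
```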