course2-Operations

This post covers TensorFlow's basic operations: constants, variables, and placeholders. It walks through using TensorBoard, and the ways to create constants (filled with a specific value, as sequences, and randomly generated). It also contrasts TensorFlow data types with NumPy data types and recommends using TF DTypes where possible, explains why constants should not be overused (they make graph loading expensive), and introduces variables and placeholders, the latter letting data be supplied at run time so programs are more flexible.


Operations


keywords: basic operations, constants, variables, control dependencies, data pipeline, TensorBoard


TensorBoard

import tensorflow as tf
a = tf.constant(2)
b = tf.constant(3)
x = tf.add(a, b) 
# Create the summary writer after graph definition and before running your session
# 'graphs' or any location where you want to keep your event files
writer = tf.summary.FileWriter('./graphs', tf.get_default_graph())
with tf.Session() as sess:
    # writer = tf.summary.FileWriter('./graphs', sess.graph) 
    print(sess.run(x))
    writer.close() # close the writer when you’re done using it

#Go to terminal, run:
python [yourprogram].py
tensorboard --logdir="./graphs" --port 6006  #6006 or any port you want
#Then open your browser and go to: http://localhost:6006/

Constants

import tensorflow as tf
a = tf.constant([2, 2], name='a')
b = tf.constant([[0, 1], [2, 3]], name='b')
'''tf.constant(
    value,
    dtype=None,
    shape=None,
    name='Const',
    verify_shape=False
)'''
#Broadcasting similar to NumPy
x = tf.multiply(a, b, name='mul')
with tf.Session() as sess:
    print(sess.run(x))
>>>[[0 2]
   [4 6]]

Tensors filled with a specific value

'''tf.zeros(shape, dtype=tf.float32, name=None)
creates a tensor of the given shape with all elements set to zero. Similar to numpy.zeros'''
tf.zeros([2, 3], tf.int32)
>>>[[0, 0, 0], [0, 0, 0]]
'''tf.zeros_like(input_tensor, dtype=None, name=None, optimize=True)
creates a tensor with the same shape and type as input_tensor (unless dtype is specified) but with all elements set to zero. Similar to numpy.zeros_like'''
# input_tensor is [[0, 1], [2, 3], [4, 5]]
tf.zeros_like(input_tensor)
>>>[[0, 0], [0, 0], [0, 0]]
#Similar to numpy.ones, numpy.ones_like
tf.ones(shape, dtype=tf.float32, name=None)
tf.ones_like(input_tensor, dtype=None, name=None, optimize=True)
# tf.fill(dims, value, name=None) creates a tensor filled with a scalar value. Similar to numpy.full
tf.fill([2, 3], 8) ==> [[8, 8, 8], [8, 8, 8]]

Constants as sequences

#tf.lin_space(start, stop, num, name=None)
tf.lin_space(10.0, 13.0, 4)
>>>[10. 11. 12. 13.]
#tf.range(start, limit=None, delta=1, dtype=None, name='range')
tf.range(3, 18, 3)  
>>>[3 6 9 12 15]
tf.range(5)
>>>[0 1 2 3 4]

Randomly Generated Constants

tf.random_normal
tf.truncated_normal
tf.random_uniform
tf.random_shuffle
tf.random_crop
tf.multinomial
tf.random_gamma
tf.set_random_seed(seed)
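
A minimal sketch using a few of these ops (the shapes and seed here are arbitrary); tf.set_random_seed fixes the graph-level seed so results are reproducible across runs:

import tensorflow as tf

tf.set_random_seed(2)
n = tf.random_normal([2, 2], mean=0.0, stddev=1.0)  # samples from N(0, 1)
t = tf.truncated_normal([2, 2])                     # re-draws samples beyond 2 stddevs
u = tf.random_uniform([2, 2], minval=0, maxval=10)  # uniform in [0, 10)
with tf.Session() as sess:
    print(sess.run([n, t, u]))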

Operations

[figure: table of TensorFlow operation categories]

Arithmetic Ops

Pretty standard, quite similar to NumPy.
[figure: table of arithmetic ops]
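
A minimal sketch of a few of these element-wise ops (the operand values are arbitrary):

import tensorflow as tf

a = tf.constant([3, 6])
b = tf.constant([2, 2])
with tf.Session() as sess:
    print(sess.run(tf.add(a, b)))       # >> [5 8]
    print(sess.run(tf.subtract(a, b)))  # >> [1 4]
    print(sess.run(tf.multiply(a, b)))  # >> [ 6 12]
    print(sess.run(tf.mod(a, b)))       # >> [1 0]
    print(sess.run(tf.pow(a, b)))       # >> [ 9 36]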

Wizard of Div

a = tf.constant([2, 2], name='a')
b = tf.constant([[0, 1], [2, 3]], name='b')
'''tf.div does TensorFlow’s style division, while tf.divide does exactly Python’s style division.'''
with tf.Session() as sess:
    print(sess.run(tf.div(b, a)))          # >> [[0 0] [1 1]]
    print(sess.run(tf.divide(b, a)))       # >> [[0. 0.5] [1. 1.5]]
    print(sess.run(tf.truediv(b, a)))      # >> [[0. 0.5] [1. 1.5]]
    print(sess.run(tf.floordiv(b, a)))     # >> [[0 0] [1 1]]
    # print(sess.run(tf.realdiv(b, a)))    # Error: only works for real values
    print(sess.run(tf.truncatediv(b, a)))  # >> [[0 0] [1 1]]
    print(sess.run(tf.floor_div(b, a)))    # >> [[0 0] [1 1]]

TensorFlow Data Types

TensorFlow accepts Python native types: boolean, numeric (int, float), and strings

t_0 = 19                                # scalars are treated like 0-d tensors
tf.zeros_like(t_0)                              # ==> 0
tf.ones_like(t_0)                               # ==> 1
t_1 = [b"apple", b"peach", b"grape"]    # 1-d arrays are treated like 1-d tensors
tf.zeros_like(t_1)                              # ==> [b'' b'' b'']
tf.ones_like(t_1)                               # ==> TypeError: Expected string, got 1 of type 'int' instead.
t_2 = [[True, False, False],
  [False, False, True],
  [False, True, False]]
tf.zeros_like(t_2)                              # ==> 3x3 tensor, all elements are False
tf.ones_like(t_2)                               # ==> 3x3 tensor, all elements are True

[figure: table of TensorFlow data types]

TF vs NP Data Types

# TensorFlow integrates seamlessly with NumPy
import numpy as np
import tensorflow as tf

tf.int32 == np.int32            # ⇒ True

#Can pass numpy types to TensorFlow ops
tf.ones([2, 2], np.float32)     # ⇒ [[1.0 1.0], [1.0 1.0]]

# For tf.Session.run(fetches): if the requested fetch is a Tensor, the output will be a NumPy ndarray.
sess = tf.Session()
a = tf.zeros([2, 3], np.int32)
print(type(a))              # ⇒ <class 'tensorflow.python.framework.ops.Tensor'>
a = sess.run(a)
print(type(a))              # ⇒ <class 'numpy.ndarray'>
sess.close()

Use TF DType when possible

It's possible to convert the data into the appropriate type when you pass it into TensorFlow, but certain data types may still be difficult to declare correctly, such as complex numbers. Because of this, it is common to create hand-defined Tensor objects as NumPy arrays; see the sketch after the list below.

  • Python native types: TensorFlow has to infer Python type
  • NumPy arrays: NumPy is not GPU compatible
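
A small sketch of the inference behavior (the values here are arbitrary): dtypes inferred from Python or NumPy values follow the source's defaults, so passing an explicit TF DType makes the intent unambiguous:

import numpy as np
import tensorflow as tf

a = tf.constant(1.0)                                 # Python float, inferred as tf.float32
b = tf.constant(np.ones((2, 2)))                     # NumPy's default float64 carries over
c = tf.constant(np.ones((2, 2)), dtype=tf.float32)   # explicit TF DType
print(a.dtype, b.dtype, c.dtype)
# >> <dtype: 'float32'> <dtype: 'float64'> <dtype: 'float32'>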

What’s wrong with constants

Constants are stored in the graph definition

my_const = tf.constant([1.0, 2.0], name="my_const")
with tf.Session() as sess:
    print(sess.graph.as_graph_def())

[output: the printed graph_def contains a node for my_const with its values serialized into the node's attributes]
This makes loading graphs expensive when constants are big
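
One common way around this, sketched below with an arbitrary shape: keep large values out of the serialized graph by using a variable whose initializer generates the data at run time, instead of a big constant baked into graph_def:

import tensorflow as tf

# a large tf.constant would be serialized into the graph definition itself;
# a variable with a generating initializer keeps only the op in the graph
big = tf.get_variable("big", shape=(1000, 10000),
                      initializer=tf.random_uniform_initializer())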

Variables

#create variables with tf.Variable
s = tf.Variable(2, name="scalar") 
m = tf.Variable([[0, 1], [2, 3]], name="matrix") 
W = tf.Variable(tf.zeros([784,10]))

#create variables with tf.get_variable
s = tf.get_variable("scalar", initializer=tf.constant(2)) 
m = tf.get_variable("matrix", initializer=tf.constant([[0, 1], [2, 3]]))
W = tf.get_variable("big_matrix", shape=(784, 10), initializer=tf.zeros_initializer())

# The second approach (tf.get_variable) is recommended
# tf.constant is an op; tf.Variable is a class that wraps many ops
tf.Variable holds several ops:
x = tf.Variable(...) 

x.initializer # init op
x.value() # read op
x.assign(...) # write op
x.assign_add(...) # and more

with tf.Session() as sess:
    print(sess.run(W))  # >> FailedPreconditionError: Attempting to use uninitialized value Variable

#You have to initialize your variables
#Initializer is an op. You need to execute it within the context of a session
#The easiest way is initializing all variables at once:
with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
#Initialize only a subset of variables:
with tf.Session() as sess:
    sess.run(tf.variables_initializer([a, b]))
#Initialize a single variable
W = tf.Variable(tf.zeros([784,10]))
with tf.Session() as sess:
    sess.run(W.initializer)
# Eval() a variable
# W below is a random 700 x 10 variable; print(W) shows the tensor object, not its value

W = tf.Variable(tf.truncated_normal([700, 10]))
with tf.Session() as sess:
    sess.run(W.initializer)
    print(W)  #>> Tensor("Variable/read:0", shape=(700, 10), dtype=float32)
    print(W.eval()) 
>>>[[-0.76781619 -0.67020458  1.15333688 ..., -0.98434633 -1.25692499
  -0.90904623]
 [-0.36763489 -0.65037876 -1.52936983 ...,  0.19320194 -0.38379928
   0.44387451]
 [ 0.12510735 -0.82649058  0.4321366  ..., -0.3816964   0.70466036
   1.33211911]
 ..., 
 [ 0.9203397  -0.99590844  0.76853162 ..., -0.74290705  0.37568584
   0.64072722]
 [-0.12753558  0.52571583  1.03265858 ...,  0.59978199 -0.91293705
  -0.02646019]
 [ 0.19076447 -0.62968266 -1.97970271 ..., -1.48389161  0.68170643
   1.46369624]]

# Usage of tf.Variable.assign()
W = tf.Variable(10)
W.assign(100)
with tf.Session() as sess:
    sess.run(W.initializer)
    print(W.eval())  # >> 10
# W.assign(100) creates an assign op. That op needs to be executed in a session to take effect.

W = tf.Variable(10)
assign_op = W.assign(100)
with tf.Session() as sess:
    sess.run(W.initializer)
    sess.run(assign_op)
    print(W.eval())  # >> 100

#Each session maintains its own copy of variables

W = tf.Variable(10)
sess1 = tf.Session()
sess2 = tf.Session()
sess1.run(W.initializer)
sess2.run(W.initializer)
print(sess1.run(W.assign_add(10)))      # >> 20
print(sess2.run(W.assign_sub(2)))       # >> 8
sess1.close()
sess2.close()

Placeholders

Why placeholders?

We, or our clients, can supply the actual data later, when the computation needs to be executed.

A TF program often has 2 phases:
  • Assemble a graph
  • Use a session to execute operations in the graph.
    ⇒ Assemble the graph first without knowing the values needed for computation.
    Analogy: define the function f(x, y) = 2 * x + y without knowing the values of x or y; x and y are placeholders for the actual values (see the sketch after this list).
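
The analogy, written as a minimal graph sketch:

import tensorflow as tf

x = tf.placeholder(tf.float32, name='x')
y = tf.placeholder(tf.float32, name='y')
f = 2 * x + y                  # graph assembled without knowing x or y

with tf.Session() as sess:
    print(sess.run(f, feed_dict={x: 3.0, y: 1.0}))  # >> 7.0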

Supplement the values to placeholders using a dictionary

#create a placeholder for a vector of 3 elements, type tf.float32
a = tf.placeholder(tf.float32, shape=[3])
b = tf.constant([5, 5, 5], tf.float32)
#use the placeholder as you would a constant or a variable
c = a + b  # short for tf.add(a, b)
with tf.Session() as sess:
    print(sess.run(c, feed_dict={a: [1, 2, 3]}))    # the tensor a is the key, not the string 'a'
# >> [6. 7. 8.]
#Feeding values to TF ops 
a = tf.add(2, 5)
b = tf.multiply(a, 3)
with tf.Session() as sess:
    #compute the value of b given a is 15
    sess.run(b, feed_dict={a: 15})          # >> 45
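
Note that the snippet above feeds a value to a, which is not a placeholder: feed_dict accepts any feedable tensor, and a placeholder simply marks a value that must be fed. The TF 1.x graph API exposes a check for this:

import tensorflow as tf

a = tf.add(2, 5)
print(tf.get_default_graph().is_feedable(a))    # >> True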

Lazy loading

Lazy loading Example

#Normal loading
x = tf.Variable(10, name='x')
y = tf.Variable(20, name='y')
z = tf.add(x, y)        # create the node before executing the graph

writer = tf.summary.FileWriter('./graphs/normal_loading', tf.get_default_graph())
with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    for _ in range(10):
        sess.run(z)
writer.close()

#Lazy loading
x = tf.Variable(10, name='x')
y = tf.Variable(20, name='y')
writer = tf.summary.FileWriter('./graphs/lazy_loading', tf.get_default_graph())
with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    for _ in range(10):
        sess.run(tf.add(x, y)) # someone decides to be clever to save one line of code
writer.close()

Normal loading
[graph: the "Add" node appears exactly once in the graph definition]

Lazy loading
[graph: a new "Add" node is created on every sess.run call, so the graph definition ends up with 10 separate "Add" nodes]

Separate definition of ops from computing/running ops
