TensorFlow Exponential Decay Source Code

This article introduces exponential decay, a method for adjusting the learning rate during training. Based on the global step count (global_step) and a preset decay interval (decay_steps), it lowers the initial learning rate exponentially. The article walks through the TensorFlow implementation, covering the parameters, the source code, and a usage example.
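
The decay rule itself is compact: the initial rate is multiplied by `decay_rate` raised to the power `global_step / decay_steps`. As a quick illustration before the source, here is a minimal pure-Python sketch of that formula (the `decayed_lr` helper and the sample values are hypothetical, chosen only to show the arithmetic):

```python
import math

def decayed_lr(learning_rate, global_step, decay_steps, decay_rate,
               staircase=False):
    # decayed = learning_rate * decay_rate ** (global_step / decay_steps)
    p = global_step / decay_steps
    if staircase:
        p = math.floor(p)  # decay only at whole multiples of decay_steps
    return learning_rate * decay_rate ** p

print(decayed_lr(0.1, 50000, 100000, 0.96))                  # ~0.09798 (smooth)
print(decayed_lr(0.1, 50000, 100000, 0.96, staircase=True))  # 0.1 (no decay yet)
print(decayed_lr(0.1, 100000, 100000, 0.96))                 # 0.096
```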


"""Various learning rate decay functions."""

from __future__ import absolute_import
from __future__ import division
from __future__ import print_function

import math

from tensorflow.python.framework import constant_op
from tensorflow.python.framework import dtypes
from tensorflow.python.framework import ops
from tensorflow.python.ops import control_flow_ops
from tensorflow.python.ops import math_ops
from tensorflow.python.ops import random_ops


def exponential_decay(learning_rate, global_step, decay_steps, decay_rate,
                      staircase=False, name=None):
  """Applies exponential decay to the learning rate.
  When training a model, it is often recommended to lower the learning rate as
  the training progresses.  This function applies an exponential decay function
  to a provided initial learning rate.  It requires a `global_step` value to
  compute the decayed learning rate.  You can just pass a TensorFlow variable
  that you increment at each training step.

  The function returns the decayed learning rate.  It is computed as:

  ```python
  decayed_learning_rate = learning_rate *
                          decay_rate ^ (global_step / decay_steps)
  ```

  If the argument `staircase` is `True`, then `global_step / decay_steps` is an
  integer division and the decayed learning rate follows a staircase function.

  Example: decay every 100000 steps with a base of 0.96:

  ```python
  ...
  global_step = tf.Variable(0, trainable=False)
  starter_learning_rate = 0.1
  learning_rate = tf.train.exponential_decay(starter_learning_rate, global_step,
                                             100000, 0.96, staircase=True)
  # Passing global_step to minimize() will increment it at each step.
  learning_step = (
      tf.train.GradientDescentOptimizer(learning_rate)
      .minimize(...my loss..., global_step=global_step)
  )
  ```

  Args:
    learning_rate: A scalar `float32` or `float64` `Tensor` or a
      Python number.  The initial learning rate.
    global_step: A scalar `int32` or `int64` `Tensor` or a Python number.
      Global step to use for the decay computation.  Must not be negative.
    decay_steps: A scalar `int32` or `int64` `Tensor` or a Python number.
      Must be positive.  See the decay computation above.
    decay_rate: A scalar `float32` or `float64` `Tensor` or a
      Python number.  The decay rate.
    staircase: Boolean.  If `True`, decay the learning rate at discrete
      intervals.
    name: String.  Optional name of the operation.  Defaults to
      'ExponentialDecay'.

  Returns:
    A scalar `Tensor` of the same type as `learning_rate`.  The decayed
    learning rate.

  Raises:
    ValueError: if `global_step` is not supplied.
  """

  if global_step is None:
    raise ValueError("global_step is required for exponential_decay.")
  with ops.name_scope(name, "ExponentialDecay",
                      [learning_rate, global_step,
                       decay_steps, decay_rate]) as name:
    learning_rate = ops.convert_to_tensor(learning_rate, name="learning_rate")
    dtype = learning_rate.dtype
    global_step = math_ops.cast(global_step, dtype)
    decay_steps = math_ops.cast(decay_steps, dtype)
    decay_rate = math_ops.cast(decay_rate, dtype)
    p = global_step / decay_steps
    if staircase:
      p = math_ops.floor(p)
    return math_ops.multiply(learning_rate, math_ops.pow(decay_rate, p),
                             name=name)
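
To see how the returned tensor behaves, here is a minimal usage sketch. It assumes a TensorFlow 1.x environment, where this function is exposed as `tf.train.exponential_decay`; the step values are arbitrary:

```python
import tensorflow as tf  # assumes TensorFlow 1.x

global_step = tf.Variable(0, trainable=False)
learning_rate = tf.train.exponential_decay(
    0.1, global_step, decay_steps=100000, decay_rate=0.96, staircase=True)

with tf.Session() as sess:
  sess.run(tf.global_variables_initializer())
  for step in (0, 50000, 100000, 200000):
    sess.run(tf.assign(global_step, step))  # simulate training progress
    print(step, sess.run(learning_rate))
```

With `staircase=True` the rate holds at 0.1 until step 100000, then drops to 0.096, and to 0.09216 at step 200000; with `staircase=False` it would decay smoothly between those points.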
   

