【tensorflow】Cause of the TensorFlow "Scale of 0 disables regularizer" message

This post explains the "Scale of 0 disables regularizer" message that can appear when using TensorFlow. The cause is calling tf.contrib.layers.l2_regularizer with the scale parameter set to 0.0, which disables regularization. The fix is to set the regularization coefficient to a value greater than 0, or to remove the regularizer entirely when regularization is not needed.


- While running TensorFlow, I saw this log output:

```
INFO:tensorflow:Scale of 0 disables regularizer.
```

At first I thought something had gone wrong, but after checking the code carefully I found the cause: tf.contrib.layers.l2_regularizer was being called with its scale parameter (the L2 regularization strength) set to 0.0. See the source:

```python
def l2_regularizer(scale, scope=None):
  """Returns a function that can be used to apply L2 regularization to weights.

  Small values of L2 can help prevent overfitting the training data.

  Args:
    scale: A scalar multiplier `Tensor`. 0.0 disables the regularizer.
    scope: An optional scope name.

  Returns:
    A function with signature `l2(weights)` that applies L2 regularization.

  Raises:
    ValueError: If scale is negative or if scale is not a float.
  """
  if isinstance(scale, numbers.Integral):
    raise ValueError('scale cannot be an integer: %s' % (scale,))
  if isinstance(scale, numbers.Real):
    if scale < 0.:
      raise ValueError('Setting a scale less than 0 on a regularizer: %g.' %
                       scale)
    if scale == 0.:
      logging.info('Scale of 0 disables regularizer.')
      return lambda _: None

  def l2(weights):
    """Applies l2 regularization to weights."""
    with ops.name_scope(scope, 'l2_regularizer', [weights]) as name:
      my_scale = ops.convert_to_tensor(scale,
                                       dtype=weights.dtype.base_dtype,
                                       name='scale')
      return standard_ops.multiply(my_scale, nn.l2_loss(weights), name=name)

  return l2
```

As the source shows, when scale == 0. the function logs exactly this message and returns a regularizer that does nothing (`lambda _: None`), so no regularization term is ever added to the loss.
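The control flow above can be sketched in plain Python without TensorFlow. This is a simplified stand-in, not the real implementation: `l2_loss` mimics `tf.nn.l2_loss` (sum of squares divided by 2), and tensors are replaced by plain lists of floats:

```python
import numbers

def l2_loss(weights):
    """Mimics tf.nn.l2_loss: sum of squared elements, divided by 2."""
    return sum(w * w for w in weights) / 2.0

def l2_regularizer(scale):
    """Simplified stand-in for tf.contrib.layers.l2_regularizer."""
    if isinstance(scale, numbers.Integral):
        raise ValueError('scale cannot be an integer: %s' % (scale,))
    if scale < 0.:
        raise ValueError('Setting a scale less than 0 on a regularizer: %g.'
                         % scale)
    if scale == 0.:
        # This is the branch that produces the INFO message in TensorFlow.
        print('INFO: Scale of 0 disables regularizer.')
        return lambda _: None  # no-op: no regularization term is produced
    return lambda weights: scale * l2_loss(weights)

reg = l2_regularizer(0.0)        # prints the INFO message
print(reg([1.0, 2.0]))           # None: no penalty term at all

reg = l2_regularizer(0.001)
print(reg([3.0, 4.0]))           # 0.001 * (9 + 16) / 2
```

With scale 0.0 the returned function yields `None` for any weights, which is why the message is informational rather than an error: the graph is simply built without a regularization term.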

- Solution:

```python
from tensorflow.contrib.layers import l2_regularizer
l2_reg = l2_regularizer(0.001)
```

In other words, set the regularization coefficient to a value greater than 0, or, if you don't need regularization, remove the regularizer entirely.
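One way to follow that advice is to decide up front whether regularization is wanted and pass `None` instead of a scale-0 regularizer, which also avoids the INFO message. A minimal plain-Python sketch (`make_regularizer` is a hypothetical helper for illustration, not a TensorFlow API):

```python
def make_regularizer(weight_decay):
    """Return an L2 regularizer, or None when regularization is off.

    Hypothetical helper: passing None to a layer's regularizer argument,
    rather than a regularizer built with scale 0, skips the penalty term
    (and the 'Scale of 0 disables regularizer' log line) entirely.
    """
    if weight_decay is None or weight_decay <= 0.0:
        return None  # regularization explicitly disabled
    # L2 penalty: weight_decay * sum(w^2) / 2, matching tf.nn.l2_loss scaling
    return lambda weights: weight_decay * sum(w * w for w in weights) / 2.0

reg = make_regularizer(0.001)
print(reg([2.0]))                 # 0.001 * 4 / 2
print(make_regularizer(0.0))      # None: caller knows to skip the term
```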
