TENSORFLOW GUIDE: EXPONENTIAL MOVING AVERAGE FOR IMPROVED CLASSIFICATION

This post shows how to use an exponential moving average (EMA) of a classifier's parameters to improve parameter selection and generalization. It implements EMA in TensorFlow, showing how to apply it during training and how to switch to the EMA variables at test time.


Parameter Selection via Exponential Moving Average

When training a classifier via gradient descent, we update the current classifier's parameters $\theta$ via

$$\theta_{t+1} = \theta_t + \alpha \, \Delta\theta_t,$$

where $\theta_t$ is the current state of the parameters and $\Delta\theta_t$ is the update step proposed by your favorite optimizer. Oftentimes, after $N$ iterations, we simply stop the optimization procedure (where $N$ is chosen using some sort of decision rule) and use $\theta_N$ as our trained classifier's parameters.

However, we often observe empirically that a post-processing step can be applied to improve the classifier's performance. One such example is Polyak averaging. A closely related (and quite popular) procedure is to take an exponential moving average (EMA) of the optimization trajectory $(\theta_n)$,

$$\theta_{\mathrm{ema}} = (1 - \lambda) \sum_{i=0}^{N} \lambda^i \, \theta_{N-i},$$

where $\lambda \in [0, 1)$ is the decay rate or momentum of the EMA. It's a simple modification to the optimization procedure that often yields better generalization than simply selecting $\theta_N$, and has also been used quite effectively in semi-supervised learning.
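In practice the EMA is maintained recursively rather than by storing the whole trajectory: each step updates a running average via $s \leftarrow \lambda s + (1 - \lambda)\,\theta_t$, and unrolling this recursion (with $s$ initialized to zero) recovers the closed form above. Here is a minimal NumPy sketch just to check the algebra; the toy trajectory is made up, and note that tf.train.ExponentialMovingAverage initializes its average to the variable's initial value rather than to zero.

import numpy as np

lam = 0.9
thetas = np.random.randn(5)  # toy trajectory theta_0, ..., theta_N

# Recursive form: s <- lam * s + (1 - lam) * theta_t
s = 0.0
for theta in thetas:
    s = lam * s + (1 - lam) * theta

# Closed form: (1 - lam) * sum_i lam^i * theta_{N - i}
N = len(thetas) - 1
closed = (1 - lam) * sum(lam**i * thetas[N - i] for i in range(N + 1))
assert np.isclose(s, closed)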

Implementation-wise, the best way to apply EMA to a classifier is to use the built-in tf.train.ExponentialMovingAverage class. However, the documentation doesn't provide a guide for how to cleanly use tf.train.ExponentialMovingAverage to construct an EMA-classifier. Since I've been playing with EMA recently, I thought that it would be helpful to write a gentle guide to implementing an EMA-classifier in TensorFlow.

Understanding tf.train.ExponentialMovingAverage

For those who wish to dive straight into the full codebase, you can find it here. For self-containedness, let’s start with the code that constructs the classifier.

import tensorflow as tf

# The layer helpers used below (conv2d, dense, max_pool, avg_pool, dropout,
# batch_norm, leaky_relu) and arg_scope are thin wrappers from the accompanying
# codebase linked above, not core TensorFlow ops.
def classifier(x, phase, scope='class', reuse=None, internal_update=False, getter=None):
    with tf.variable_scope(scope, reuse=reuse, custom_getter=getter):
        with arg_scope([leaky_relu], a=0.1), \
             arg_scope([conv2d, dense], activation=leaky_relu, bn=True, phase=phase), \
             arg_scope([batch_norm], internal_update=internal_update):

            x = conv2d(x, 96, 3, 1)
            x = conv2d(x, 96, 3, 1)
            x = conv2d(x, 96, 3, 1)
            x = max_pool(x, 2, 2)
            x = dropout(x, training=phase)
            x = conv2d(x, 192, 3, 1)
            x = conv2d(x, 192, 3, 1)
            x = conv2d(x, 192, 3, 1)
            x = max_pool(x, 2, 2)
            x = dropout(x, training=phase)
            x = conv2d(x, 192, 3, 1)
            x = conv2d(x, 192, 3, 1)
            x = conv2d(x, 192, 3, 1)
            x = avg_pool(x, global_pool=True)
            x = dense(x, 10, activation=None)

    return x

Here, I use a fairly standard CNN architecture. The first thing to note is the use of variable scoping. This puts all of the classifier’s variables within the scope class/. To create the classifier, simply call

train_y_pred = classifier(train_x, phase=True, internal_update=True)
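At this point we can sanity-check the scoping: every trainable variable the classifier created now lives under the class/ prefix. A quick hypothetical inspection (not part of the original post):

for v in tf.trainable_variables():
    print(v.name)  # every name begins with 'class/'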

Once the classifier is created in the computational graph, variable scoping allows for easy access to the classifier's trainable variables via

var_class = tf.get_collection('trainable_variables', 'class') # Get list of the classifier's trainable variables
ema = tf.train.ExponentialMovingAverage(decay=0.998)
ema_op = ema.apply(var_class)

After getting the list of trainable variables via tf.get_collection, we use ema.apply, which serves two purposes. First, it constructs an auxiliary variable for each corresponding variable in var_class to hold the exponential moving average. Second, it returns a TensorFlow op that updates the EMA variables. The ema object can then access each variable's EMA via the function ema.average

# Demonstration of ema.average
var_ema_at_index_0 = ema.average(var_class[0])
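Note that ema_op has to actually be run once per training step for the averages to track the optimization trajectory. Here is a minimal sketch of one common pattern, chaining the EMA update to the gradient step with a control dependency; the loss, optimizer, learning rate, and train_y labels are my own placeholders, not from the original post.

# Hypothetical loss; train_y would be a placeholder for the training labels.
loss = tf.losses.softmax_cross_entropy(onehot_labels=train_y, logits=train_y_pred)

opt_op = tf.train.AdamOptimizer(1e-3).minimize(loss, var_list=var_class)
with tf.control_dependencies([opt_op]):
    train_op = tf.group(ema_op)  # gradient step first, then the EMA update

Each sess.run(train_op) then performs one optimization step followed by one EMA update.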

Populating the Classifier with the EMA Variables

So far, we've figured out how to create the EMA variables and how to access them. But what's the easiest way to make the classifier use the EMA variables? Here, we leverage the custom_getter argument of tf.variable_scope. According to the documentation, whenever you call tf.get_variable, the default getter retrieves the existing variable by name. A custom getter, however, can change which tensor tf.get_variable returns.

To construct the custom getter, locally define ema_getter after you’ve already created the ema object

def ema_getter(getter, name, *args, **kwargs):
    var = getter(name, *args, **kwargs)  # the variable the default getter would return
    ema_var = ema.average(var)           # its EMA shadow, or None if ema.apply never covered it
    return ema_var if ema_var is not None else var

To apply the EMA classifier at test time, we simply call classifier again, this time with the custom ema_getter (and with reuse=True, since the classifier's variables already exist in the class/ scope)

test_y_pred = classifier(test_x, phase=False, reuse=True, internal_update=False, getter=ema_getter)

And that’s it! We can now verify that applying EMA does in fact improve the performance of the classifier on the CIFAR-10 test data set.
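As a practical aside, if you checkpoint the model and want to load the averaged weights for evaluation later, the ema object also provides variables_to_restore, which maps the EMA shadow variables back onto the original variable names for a Saver. A short sketch:

# Build a Saver that restores EMA values into the model's variables.
saver = tf.train.Saver(ema.variables_to_restore())
# saver.restore(sess, '/path/to/checkpoint')  # hypothetical checkpoint path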

[Figure: CIFAR-10 test performance of the classifier with and without EMA.]

You can find the full code for training the CIFAR-10 classifier below.


Code on GitHub


http://ruishu.io/2017/11/22/ema/
