The building blocks of Deep Learning


A feed-forward network is built up of nodes that make a directed acyclic graph (DAG). This post will focus on how a single node works and what we need to implement if we want to define one. It is aimed at people who generally know how deep networks work, but can still be confused about exactly what gradients need to be computed for each node (i.e. myself, all the time).

Network

In our network, we will have three different types of nodes (often called layers as well):

  • Static data (data and labels)
  • Dynamic data (parameters)
  • Functions

This is a bit different from the traditional take on nodes, since we are not allowing nodes to have any internal parameters. Instead, parameters will be fed into function nodes as dynamic data. A network with two fully connected layers may look like this:

[Figure: a network with two fully connected layers, drawn as a DAG of data, parameter, and function nodes.]

The static data nodes are light blue and the dynamic data nodes (parameter nodes) are orange.

To train this network, all we need is the derivative of the loss, $L$, with respect to each of the parameter nodes. For this, we need to consider the canonical building block, the function node:

[Figure: a single function node taking inputs $x_1, \dots, x_n$ and producing an output $z$.]

It takes any number of inputs and produces an output (that eventually leads to the loss). In the eyes of the node, it makes no distinction between static and dynamic data, which makes things both simpler and more flexible. What we need from this building block is a way to compute $z$ and the derivative of $L$ with respect to each of the inputs. First of all, we need a function that computes

$$z = \mathrm{forward}((x_1, \dots, x_n)).$$

This is the simple part and should be trivial once you have decided what you want the node to do.

Next, computing the derivative of a single element of one of the inputs may look like (superscript omitted):

$$\frac{\partial L}{\partial x_i} = \sum_j \frac{\partial L}{\partial z_j} \frac{\partial z_j}{\partial x_i}$$

We broke the derivative up using the multivariable chain rule (also known as the total derivative). It can also be written as

$$\frac{\partial L}{\partial x} = \left(\frac{dz}{dx}\right)^{\top} \frac{\partial L}{\partial z} \qquad \left[\,\mathbb{R}^{A \times 1} = \mathbb{R}^{A \times B}\, \mathbb{R}^{B \times 1}\,\right]$$

This assumes that the input size is $A$ and the output size is $B$. The derivative $\frac{\partial L}{\partial z} \in \mathbb{R}^{B}$ is something that needs to be given to the building block from the outside (this is the gradient being back-propagated). The Jacobian $\frac{dz}{dx} \in \mathbb{R}^{B \times A}$, on the other hand, needs to be defined by the node. However, we do not necessarily need to explicitly compute it or store it. All we need is to define the function

$$\frac{\partial L}{\partial x} = \mathrm{backward}\!\left(x, z, \frac{\partial L}{\partial z}\right)$$

This would need to be done for each input separately. Since they sometimes share computations, frameworks like Caffe use a single function for the entire node’s backward computation. In our code examples, we will adopt this as well, meaning we will be defining:

$$\left(\frac{\partial L}{\partial x_1}, \dots, \frac{\partial L}{\partial x_n}\right) = \mathrm{backward}\!\left((x_1, \dots, x_n), z, \frac{\partial L}{\partial z}\right)$$

It is also common to support multiple outputs; however, for simplicity (and without loss of generality) we will assume there is only one.

Functions

So, the functions that we need to define for a single node are, first, the forward pass:

[Figure: forward(input data) → output data]

The input data refers to all the inputs, so it will for instance be a list of arrays.

Next, the backward pass:

[Figure: backward(input data, output data, output diff) → input diff]

It takes three inputs as described above and returns the gradient of the loss with respect to the input. It does not need to take the output data, since it can be computed from the input data. However, if it is needed, we might as well pass it in since we will have computed it already.
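To make this interface concrete, here is a minimal sketch of what such a node could look like in Python; the class name and exact signatures are mine, not Caffe's:

class Node:
    """A function node; `inputs` is a list of arrays (static or dynamic data)."""

    def forward(self, inputs):
        # Compute and return the output z from the list of inputs.
        raise NotImplementedError

    def backward(self, inputs, output, output_diff):
        # Given dL/dz (output_diff), return a list with dL/dx_i for each input.
        raise NotImplementedError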

Forward / Backward pass

A DAG describes a partial ordering. First, we need to sort our nodes so that they do not violate the partial order (a topological sort). There will typically be several valid orderings, but we can pick one arbitrarily.

Once we have this ordering, we call forward on the list from the first node to the last. The order will guarantee that the dependencies of a node have been computed when we get to it. The Loss should be the last node. This is called a forward pass.

Then, we call backward on this list in reverse. This means that we start with the Loss node. Since we do not have any output diff at this point, we simply set it to an array of all ones. We proceed until we are done with the first in the list. This is called a backward pass.

Once the forward and the backward pass have been performed, we take the gradients that have arrived at each parameter node and perform a gradient descent update in the opposite direction.
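Putting the pieces together, here is a rough sketch of what one training step could look like; the node attributes (parents, value, diff, is_parameter) are invented for illustration and not taken from any framework:

import numpy as np

def train_step(order, learning_rate=0.01):
    # `order` is the list of nodes in topological order, with the Loss node last.
    # Data and parameter nodes have an empty .parents list and hold their array
    # in .value; function nodes implement forward/backward as described above.

    # Forward pass: every dependency is computed before it is used.
    for node in order:
        if node.parents:
            node.value = node.forward([p.value for p in node.parents])

    # Backward pass: the Loss node starts with an all-ones output diff.
    for node in order:
        node.diff = None
    order[-1].diff = np.ones_like(order[-1].value)

    for node in reversed(order):
        if not node.parents:
            continue
        grads = node.backward([p.value for p in node.parents], node.value, node.diff)
        for parent, grad in zip(node.parents, grads):
            # If a node feeds several consumers (e.g. shared weights), sum the gradients.
            parent.diff = grad if parent.diff is None else parent.diff + grad

    # Gradient descent update on the parameter (dynamic data) nodes.
    for node in order:
        if node.is_parameter:
            node.value -= learning_rate * node.diff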

Weight sharing

Externalizing the parameters makes parameter sharing conceptually easy to deal with. For instance, if we wanted to share weights (but not biases), we could do:

[Figure: two Dense nodes sharing a single weight node $W$, each with its own bias.]

In this case, $W$ would receive two gradient arrays, and the sum is taken before performing the update step.
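Concretely, the update for the shared weights could look something like this minimal sketch (shapes and names are made up):

import numpy as np

# Hypothetical shapes; W is shared by two Dense nodes, so it receives two diffs.
A, B = 4, 3
W = np.random.randn(A, B)
dW_from_dense1 = np.random.randn(A, B)  # dL/dW coming back through the first consumer
dW_from_dense2 = np.random.randn(A, B)  # dL/dW coming back through the second consumer

learning_rate = 0.01
W -= learning_rate * (dW_from_dense1 + dW_from_dense2)  # sum first, then update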

ReLU

As an example, the ReLU has a single input of the same size as the output, so $A = B$. The output is computed elementwise as

$$z_i = \max(0, x_i)$$

which could translate to something like this in Python (who uses pseudo-code anymore?):

import numpy as np

def forward(inputs):
    return np.maximum(inputs[0], 0)

For the backward pass, the Jacobian will be a diagonal matrix, with entries

$$\frac{\partial z_i}{\partial x_i} = \mathbb{1}\{x_i > 0\},$$

where $\mathbb{1}\{P\}$ is 1 if the predicate $P$ is true, and zero otherwise (see Iverson bracket). We can now write the gradient of the loss as

$$\frac{\partial L}{\partial x} = \left(\frac{dz}{dx}\right)^{\top} \frac{\partial L}{\partial z} = \mathbb{1}\{x > 0\} \odot \frac{\partial L}{\partial z},$$

where $\odot$ denotes an elementwise product.

def backward(inputs, output, output_diff):
    return [(inputs[0] > 0) * output_diff]

Note that we have to return a list, since we could have multiple inputs.

Dense

Moving on to the dense (fully connected) layer where

$$z = W^{\top} x + b \qquad \left[\,\mathbb{R}^{B \times 1} = \mathbb{R}^{B \times A}\, \mathbb{R}^{A \times 1} + \mathbb{R}^{B \times 1}\,\right]$$

However, remember that we make no distinction between static and dynamic input, and from the point of view of our Dense node it simply looks like:

$$z = x_2^{\top} x_1 + x_3$$

Which might translate to:

def forward(inputs):
    x, W, b = inputs
    return W.T @ x + b

For the backward pass, we need to compute all three Jacobians and multiply them by the gradient coming in from above. Let’s start with $x$:

$$\frac{dz}{dx} = W^{\top} \in \mathbb{R}^{B \times A}$$

which gives us

Gradient #1:

$$\frac{\partial L}{\partial x} = \left(\frac{dz}{dx}\right)^{\top} \frac{\partial L}{\partial z} = W \frac{\partial L}{\partial z}$$

Moving on. Since $W \in \mathbb{R}^{A \times B}$, its Jacobian should have the dimensions $B \times (A \times B)$. We know the bias will drop off, so we can write the output that we will be taking the Jacobian of as:

$$z' = \left(\sum_{j=1}^{A} W_{j,1} x_j, \;\dots,\; \sum_{j=1}^{A} W_{j,B} x_j\right)$$

Now, let’s compute the derivative of $z_i'$ (and thus $z_i$) with respect to $W_{j,k}$:

$$\frac{\partial z_i}{\partial W_{j,k}} = \begin{cases} x_j & \text{if } i = k \\ 0 & \text{otherwise} \end{cases}$$

With a bit of collapsing things together (Einstein notation is great for this, but the steps are omitted here), we get an outer product of two vectors

Gradient #2:

$$\frac{\partial L}{\partial W} = \left(\frac{dz}{dW}\right)^{\top} \frac{\partial L}{\partial z} = x \left(\frac{\partial L}{\partial z}\right)^{\top}$$

The final Jacobian is simply an identity matrix

$$\frac{dz}{db} = I \in \mathbb{R}^{B \times B}$$

so the Loss derivative with respect to the bias is just the gradients coming in from above unchanged

Gradient #3:

$$\frac{\partial L}{\partial b} = \left(\frac{dz}{db}\right)^{\top} \frac{\partial L}{\partial z} = \frac{\partial L}{\partial z}$$

We thus have all three gradients (with no regard as to which ones are parameters). This might translate in code to:

def backward(inputs, output, output_diff):
    x, W, b = inputs
    return [
        W @ output_diff,
        np.outer(x, output_diff),
        output_diff,
    ]

Now, the frameworks that I know of do not externalize the parameters, so instead of returning the last two gradients, they would be applied to the internal parameters through some other means. However, the main ideas, and certainly the math, will be exactly the same.

Loss

You should get the idea by now. The final note is that when we do this for the Loss layer, we still need to pretend the node has been placed in the middle of a network with an actual loss at the end of it. The Loss node should not be different in any way, except that its output is a scalar. However, a good loss node should in principle be usable in the middle of a network, so it should still query output_diff and use it correctly (even though it will be all ones when used in the final position).
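As an illustrative example (not from the original post), a squared-error loss node following the same interface could look like this; note that it still multiplies by output_diff, so it would also behave correctly in the middle of a network:

import numpy as np

class SquaredErrorLoss:
    # L = 0.5 * sum((x - t)^2), with inputs = [x, t] (predictions and targets).

    def forward(self, inputs):
        x, t = inputs
        return 0.5 * np.sum((x - t) ** 2)  # scalar output

    def backward(self, inputs, output, output_diff):
        x, t = inputs
        # output_diff is all ones (a scalar here) when this node sits at the end,
        # but multiplying by it keeps the node usable anywhere in the network.
        return [output_diff * (x - t),   # dL/dx
                output_diff * (t - x)]   # dL/dt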

Summary

In summary, the usual steps when constructing a new node/layer are:

  • Compute the forward pass
  • Calculate the Jacobian for all your inputs (static and dynamic alike)
  • Multiply them by the gradient coming in from above. At this point, we will often realize that we do not have to ever store the entire Jacobian.

Creating an LMDB database in Python

LMDB is the database of choice when using Caffe with large datasets. This is a tutorial of how to create an LMDB database from Python. First, let’s look at the pros and cons of using LMDB over HDF5.

Reasons to use HDF5:

  • Simple format to read/write.

Reasons to use LMDB:

  • LMDB uses memory-mapped files, giving much better I/O performance.
  • Works well with really large datasets. The HDF5 files are always read entirely into memory, so you can’t have any HDF5 file exceed your memory capacity. You can easily split your data into several HDF5 files though (just put several paths to h5files in your text file). Then again, compared to LMDB’s page caching the I/O performance won’t be nearly as good.

LMDB from Python

You will need the Python package lmdb as well as Caffe’s python package (make pycaffe in Caffe). LMDB provides key-value storage, where each <key, value> pair will be a sample in our dataset. The key will simply be a string version of an ID value, and the value will be a serialized version of Caffe’s Datum class (which is defined using protobuf).

import numpy as np
import lmdb
import caffe

N = 1000

# Let's pretend this is interesting data
X = np.zeros((N, 3, 32, 32), dtype=np.uint8)
y = np.zeros(N, dtype=np.int64)

# We need to prepare the database for the size. We'll set it 10 times
# greater than what we theoretically need. There is little drawback to
# setting this too big. If you still run into problems after raising
# this, you might want to try saving fewer entries in a single
# transaction.
map_size = X.nbytes * 10

env = lmdb.open('mylmdb', map_size=map_size)

with env.begin(write=True) as txn:
    # txn is a Transaction object
    for i in range(N):
        datum = caffe.proto.caffe_pb2.Datum()
        datum.channels = X.shape[1]
        datum.height = X.shape[2]
        datum.width = X.shape[3]
        datum.data = X[i].tobytes()  # or .tostring() if numpy < 1.9
        datum.label = int(y[i])
        str_id = '{:08}'.format(i)

        # The encode is only essential in Python 3
        txn.put(str_id.encode('ascii'), datum.SerializeToString())

You can also open up and inspect an existing LMDB database from Python:

import numpy as np
import lmdb
import caffe

env = lmdb.open('mylmdb', readonly=True)
with env.begin() as txn:
    raw_datum = txn.get(b'00000000')

datum = caffe.proto.caffe_pb2.Datum()
datum.ParseFromString(raw_datum)

flat_x = np.frombuffer(datum.data, dtype=np.uint8)  # np.fromstring is deprecated in newer numpy
x = flat_x.reshape(datum.channels, datum.height, datum.width)
y = datum.label

Iterating <key, value> pairs is also easy:

with env.begin() as txn:
    cursor = txn.cursor()
    for key, value in cursor:
        print(key, value)

Initialization of deep networks

As we all know, the solution found by a non-convex optimization algorithm (like stochastic gradient descent) depends on the initial values of the parameters. This post is about choosing initialization parameters for deep networks and how that choice affects convergence. We will also discuss the related topic of vanishing gradients.

First, let’s go back to the time of sigmoidal activation functions and initialization of parameters using IID Gaussian or uniform distributions with fairly arbitrarily set variances. Building deep networks was difficult because of exploding or vanishing activations and gradients. Let’s take activations first: If all your parameters are too small, the variance of your activations will drop in each layer. This is a problem if your activation function is sigmoidal, since it is approximately linear close to 0. That is, you gradually lose your non-linearity, which means there is no benefit to having multiple layers. If, on the other hand, your activations become larger and larger, then your activations will saturate and become meaningless, with gradients approaching 0.

Activation functions

Let us consider one layer and forget about the bias. Note that the following analysis and conclusion is taken from Glorot and Bengio [1]. Consider a weight matrix $W \in \mathbb{R}^{m \times n}$, where each element was drawn from an IID Gaussian with variance $\mathrm{Var}(W)$. Note that we are a bit abusive with notation, letting $W$ denote both a matrix and a univariate random variable. We also assume there is no correlation between our input and our weights, and both are zero-mean. If we consider one filter (row) in $W$, say $w$ (a random vector), then the variance of the output signal over the input signal is:

$$\frac{\mathrm{Var}(w^{\top} x)}{\mathrm{Var}(X)} = \frac{\sum_{i=1}^{n} \mathrm{Var}(w_i x_i)}{\mathrm{Var}(X)} = \frac{n\, \mathrm{Var}(W)\, \mathrm{Var}(X)}{\mathrm{Var}(X)} = n\, \mathrm{Var}(W)$$

As we build a deep network, we want the variance of the signal going forward in the network to remain the same, thus it would be advantageous if $n\,\mathrm{Var}(W) = 1$. The same argument can be made for the gradients, the signal going backward in the network, and the conclusion is that we would also like $m\,\mathrm{Var}(W) = 1$. Unless $n = m$, it is impossible to satisfy both of these conditions. In practice, it works well if both are approximately satisfied. One thing that has never been clear to me is why it is only necessary to satisfy these conditions when picking the initialization values of $W$. It would seem that we have no guarantee that the conditions will remain true as the network is trained.

Nevertheless, this Xavier initialization (after Glorot’s first name) is a neat trick that works well in practice. However, along came rectified linear units (ReLU), a non-linearity that is scale-invariant around 0 and does not saturate at large input values. This seemingly solved both of the problems the sigmoid function had; or were they just alleviated? I am unsure of how widely used Xavier initialization is, but if it is not, perhaps it is because ReLU seemingly eliminated this problem.
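As a minimal sketch, Xavier initialization of a weight matrix could look like this in numpy, using the common compromise $\mathrm{Var}(W) = 2/(n + m)$; the function name is mine:

import numpy as np

def xavier_init(fan_in, fan_out):
    # Compromise between n*Var(W) = 1 and m*Var(W) = 1: Var(W) = 2 / (fan_in + fan_out).
    std = np.sqrt(2.0 / (fan_in + fan_out))
    return std * np.random.randn(fan_in, fan_out)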

However, take one of the most competitive recent networks, VGG [2]. They do not use this kind of initialization, although they report that it was tricky to get their networks to converge. They say that they first trained their most shallow architecture and then used that to help initialize the second one, and so forth. They presented 6 networks, so it seems like an awfully complicated training process to get to the deepest one.

A recent paper by He et al. [3] presents a pretty straightforward generalization of ReLU and Leaky ReLU. What is more interesting is their emphasis on the benefits of Xavier initialization even for ReLU. They re-did the derivations for ReLUs and discovered that the conditions were the same up to a factor of 2. The difficulty Simonyan and Zisserman had training VGG is apparently avoidable, simply by using Xavier initialization (or, better yet, the ReLU-adjusted version). Using this technique, He et al. reportedly trained a whopping 30-layer deep network to convergence in one go.
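The ReLU-adjusted version only changes the variance by that factor of 2; again just a sketch, with a name of my choosing:

import numpy as np

def he_init(fan_in, fan_out):
    # ReLU-adjusted version: Var(W) = 2 / fan_in; the extra factor of 2
    # compensates for ReLU zeroing out half of the signal.
    return np.sqrt(2.0 / fan_in) * np.random.randn(fan_in, fan_out)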

Another recent paper tackling the signal scaling problem is by Ioffe and Szegedy [4]. They call the change in scale internal covariate shift and claim this forces learning rates to be unnecessarily small. They suggest that if all layers have the same scale and remain so throughout training, a much higher learning rate becomes practically viable. You cannot just standardize the signals, since you would lose expressive power (the bias disappears, and in the case of sigmoids we would be constrained to the linear regime). They solve this by re-introducing two parameters per layer, a scale and a bias, applied after the standardization. The training reportedly becomes about 6 times faster and they present state-of-the-art results on ImageNet. However, I’m not certain this is the solution that will stick.
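A rough sketch of the idea behind that forward computation (standardize per feature, then re-introduce a learned scale gamma and bias beta); this shows only the training-time forward pass, with names chosen by me:

import numpy as np

def batch_norm_forward(x, gamma, beta, eps=1e-5):
    # x has shape (batch, features); gamma and beta are learned per-feature
    # scale and bias that restore the expressive power lost by standardizing.
    mean = x.mean(axis=0)
    var = x.var(axis=0)
    x_hat = (x - mean) / np.sqrt(var + eps)  # zero mean, unit variance per feature
    return gamma * x_hat + beta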

I reckon we will see a lot more work on this frontier in the next few years. Especially since it also relates to the – right now wildly popular – Recurrent Neural Network (RNN), which connects output signals back as inputs. The way you train such a network is that you unroll the time axis, treating the result as an extremely deep feed-forward network. This greatly exacerbates the vanishing gradient problem. A popular solution, called Long Short-Term Memory (LSTM), is to introduce memory cells, which are a type of teleport that allows a signal to jump ahead many time steps. This means that the gradient is retained for all those time steps and can be propagated back to a much earlier time without vanishing.

This area is far from solved, and until then I think I will be sticking to Xavier initialization. If you are using Caffe, the one take-away of this post is to use the following on all your layers:

weight_filler { 
    type: "xavier" 
}

References

  1. X. Glorot and Y. Bengio, “Understanding the difficulty of training deep feedforward neural networks,” in International conference on artificial intelligence and statistics, 2010, pp. 249–256.

  2. K. Simonyan and A. Zisserman, “Very deep convolutional networks for large-scale image recognition,” arXiv preprint arXiv:1409.1556, 2014.

  3. K. He, X. Zhang, S. Ren, and J. Sun, “Delving Deep into Rectifiers: Surpassing Human-Level Performance on ImageNet Classification,” arXiv:1502.01852 [cs], Feb. 2015.

  4. S. Ioffe and C. Szegedy, “Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift,” arXiv:1502.03167 [cs], Feb. 2015.

Original post: http://deepdish.io/