Hinton's Dropout in 3 Lines of Python

How to install Dropout into a neural network by only changing 3 lines of Python.

Posted by iamtrask on July 28, 2015

Summary: Dropout is a vital feature in almost every state-of-the-art neural network implementation. This tutorial teaches how to install Dropout into a neural network in only a few lines of Python code. Those who walk through this tutorial will finish with a working Dropout implementation and will be empowered with the intuitions to install it and tune it in any neural network they encounter.

Followup Post: I intend to write a followup post to this one adding popular features leveraged by state-of-the-art approaches. I'll tweet it out when it's complete @iamtrask. Feel free to follow if you'd be interested in reading more and thanks for all the feedback!

Just Give Me The Code:

01. import numpy as np
02. X = np.array([ [0,0,1],[0,1,1],[1,0,1],[1,1,1] ])
03. y = np.array([[0,1,1,0]]).T
04. alpha,hidden_dim,dropout_percent,do_dropout = (0.5,4,0.2,True)
05. synapse_0 = 2*np.random.random((3,hidden_dim)) - 1
06. synapse_1 = 2*np.random.random((hidden_dim,1)) - 1
07. for j in xrange(60000):
08.     layer_1 = (1/(1+np.exp(-(np.dot(X,synapse_0)))))
09.     if(do_dropout):
10.         layer_1 *= np.random.binomial([np.ones((len(X),hidden_dim))],1-dropout_percent)[0] * (1.0/(1-dropout_percent))
11.     layer_2 = 1/(1+np.exp(-(np.dot(layer_1,synapse_1))))
12.     layer_2_delta = (layer_2 - y)*(layer_2*(1-layer_2))
13.     layer_1_delta = layer_2_delta.dot(synapse_1.T) * (layer_1 * (1-layer_1))
14.     synapse_1 -= (alpha * layer_1.T.dot(layer_2_delta))
15.     synapse_0 -= (alpha * X.T.dot(layer_1_delta))
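
The listing above trains silently. If you'd like to watch it converge, one optional addition (not part of the original code) is to report the mean error inside the loop, just after line 11:

    if (j % 10000) == 0:
        print("Error: " + str(np.mean(np.abs(layer_2 - y))))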


width="728" height="90" frameborder="0" marginwidth="0" marginheight="0" vspace="0" hspace="0" allowtransparency="true" scrolling="no" allowfullscreen="true" id="aswift_0" name="aswift_0" style="box-sizing: border-box; left: 0px; position: absolute; top: 0px;">


Part 1: What is Dropout?

As discovered in the previous post, a neural network is a glorified search problem. Each node in the neural network is searching for correlation between the input data and the correct output data.

Consider the graphic above from the previous post. The line represents the error the network generates for every value of a particular weight. The low-points (READ: low error) in that line signify the weight "finding" points of correlation between the input and output data. The balls in the picture signify various weights. They are trying to find those low points.

Consider the colored zones in the graphic. The balls' initial positions are randomly generated (just like the weights in a neural network). If two balls randomly start in the same colored zone, they will converge to the same point. This makes them redundant! They're wasting computation and memory! This is exactly what happens in neural networks.

Why Dropout: Dropout helps prevent weights from converging to identical positions. It does this by randomly turning nodes off when forward propagating. It then back-propagates with all the nodes turned on. Let’s take a closer look.
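
To make "randomly turning nodes off" concrete, here is a tiny standalone sketch (the array names and sizes are illustrative, not taken from the network above). A dropout mask is just a random binary matrix that zeroes out a fraction of a layer's activations:

import numpy as np

np.random.seed(1)
activations = np.random.random((4, 5))   # pretend hidden-layer outputs
dropout_percent = 0.2                    # turn off roughly 20% of the nodes
mask = np.random.binomial(1, 1 - dropout_percent, size=activations.shape)   # 1 = keep, 0 = drop
dropped = activations * mask             # dropped nodes contribute nothing downstream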


width="728" height="90" frameborder="0" marginwidth="0" marginheight="0" vspace="0" hspace="0" allowtransparency="true" scrolling="no" allowfullscreen="true" id="aswift_1" name="aswift_1" style="box-sizing: border-box; left: 0px; position: absolute; top: 0px;">


Part 2: How Do I Install and Tune Dropout?

The code above demonstrates how to install dropout. To perform dropout on a layer, you randomly set some of the layer's values to 0 during forward propagation. This is demonstrated on line 10.

Line 9: parameterizes whether to use dropout at all. You see, you only want to use dropout during training. Do not use it at runtime or on your testing dataset.

EDIT: Line 10: has a second portion that increases the size of the values being propagated forward. This happens in proportion to the number of values being turned off. A simple intuition is that if you're turning off half of your hidden layer, you want to double the values that ARE pushed forward so that the output compensates correctly. Many thanks to @karpathy for catching this one.
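
Here is a quick toy check (mine, not from the original post) of why that scaling matters: without the 1/(1 - dropout_percent) factor, the average activation flowing into layer_2 shrinks by roughly the dropout percentage.

import numpy as np

np.random.seed(1)
layer = np.ones((1000, 4))     # toy activations, all equal to 1.0
dropout_percent = 0.5
mask = np.random.binomial(1, 1 - dropout_percent, size=layer.shape)

print(layer.mean())                                            # 1.0
print((layer * mask).mean())                                   # about 0.5, activations shrank
print((layer * mask * (1.0 / (1 - dropout_percent))).mean())   # about 1.0, compensated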

Tuning Best Practice

Line 4: parameterizes dropout_percent, the probability that any one node will be turned off during a forward pass. A good initial configuration for hidden layers is 50%. If applying dropout to an input layer, it's best not to exceed 25%.
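
If you did want dropout on the input layer as well, a hypothetical sketch (not in the listing above) would mask X the same way line 10 masks layer_1, just with the lower rate:

input_dropout_percent = 0.25   # stay at or below 25% for input layers
input_mask = np.random.binomial(1, 1 - input_dropout_percent, size=X.shape)
X_dropped = X * input_mask * (1.0 / (1 - input_dropout_percent))
layer_1 = 1/(1+np.exp(-(np.dot(X_dropped, synapse_0))))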

Hinton advocates tuning dropout in conjunction with tuning the size of your hidden layer. Increase your hidden layer size(s) with dropout turned off until you perfectly fit your data. Then, using the same hidden layer size, train with dropout turned on. This should be a nearly optimal configuration. Turn off dropout as soon as you're done training and voila! You have a working neural network!
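
"Turning off dropout" in the listing above just means leaving the mask out of the forward pass: set do_dropout to False before evaluating, which also skips the rescaling. A rough sketch:

do_dropout = False   # dropout is a training-time trick only
layer_1 = 1/(1+np.exp(-(np.dot(X, synapse_0))))          # plain forward pass, no mask, no rescaling
layer_2 = 1/(1+np.exp(-(np.dot(layer_1, synapse_1))))
print(layer_2)       # compare against y = [0, 1, 1, 0]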



Want to Work in Machine Learning?

One of the best things you can do to learn Machine Learning is to have a job where you're practicing Machine Learning professionally. I'd encourage you to check out the positions at Digital Reasoning in your job hunt. If you have questions about any of the positions or about life at Digital Reasoning, feel free to send me a message on my LinkedIn. I'm happy to hear about where you want to go in life, and help you evaluate whether Digital Reasoning could be a good fit.

If none of the positions above feels like a good fit, continue your search! Machine Learning expertise is one of the most valuable skills in the job market today, and there are many firms looking for practitioners.
