Automatic Optimization: A Step Forward for Deep Learning Frameworks
Training neural networks is a core task in deep learning. Traditionally, we had to write the backpropagation logic by hand, which is not only tedious but also error-prone. Today, automatic differentiation (autograd) systems have greatly simplified this process.
The Tedium of Manual Backpropagation
First, let's look at what manual backpropagation involves. Below is a code example that trains a simple neural network with hand-written backpropagation:
import numpy as np

np.random.seed(0)

data = np.array([[0,0],[0,1],[1,0],[1,1]])
target = np.array([[0],[1],[0],[1]])

weights_0_1 = np.random.rand(2,3)
weights_1_2 = np.random.rand(3,1)

for i in range(10):
    # Forward pass: two linear layers
    layer_1 = data.dot(weights_0_1)
    layer_2 = layer_1.dot(weights_1_2)

    # Squared-error loss
    diff = (layer_2 - target)
    sqdiff = (diff * diff)
    loss = sqdiff.sum(0)

    # Backward pass: gradients derived by hand via the chain rule
    layer_1_grad = diff.dot(weights_1_2.transpose())
    weight_1_2_update = layer_1.transpose().dot(diff)
    weight_0_1_update = data.transpose().dot(layer_1_grad)

    # Gradient descent step (the source snippet is cut off after the first
    # update; a 0.1 learning rate and the weights_0_1 update are assumed here
    # so the loop actually trains both layers and converges)
    weights_1_2 -= weight_1_2_update * 0.1
    weights_0_1 -= weight_0_1_update * 0.1
    print(loss[0])
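For contrast, the same two-layer linear network can be trained without deriving a single gradient by hand, by letting an autograd system build the computation graph and backpropagate through it. The sketch below uses PyTorch as one concrete autograd implementation (an assumption; the original article may build or use a different framework):

```python
import torch

torch.manual_seed(0)

data = torch.tensor([[0.,0.],[0.,1.],[1.,0.],[1.,1.]])
target = torch.tensor([[0.],[1.],[0.],[1.]])

# requires_grad=True tells autograd to track operations on these tensors
w0 = torch.rand(2, 3, requires_grad=True)
w1 = torch.rand(3, 1, requires_grad=True)

for i in range(10):
    pred = data.mm(w0).mm(w1)            # forward pass
    loss = ((pred - target) ** 2).sum()  # squared-error loss
    loss.backward()                      # autograd fills w0.grad and w1.grad
    with torch.no_grad():                # plain SGD step, outside the graph
        w0 -= 0.1 * w0.grad
        w1 -= 0.1 * w1.grad
        w0.grad.zero_()
        w1.grad.zero_()
```

Notice that the backward-pass lines from the manual version (`layer_1_grad`, the two `_update` matrices) disappear entirely: `loss.backward()` applies the same chain rule automatically, which is exactly the tedium autograd removes.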