A fully connected ReLU neural network with one hidden layer and no bias, trained to predict y from x with an L2 loss.
This implementation uses numpy alone to compute the forward pass, the loss, and the backward pass.
A numpy ndarray is just a generic n-dimensional array. It knows nothing about deep learning, gradients, or computation graphs; it is simply a data structure for numerical computation.
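For reference, here is the math that the training loop below implements. The notation is introduced here for convenience: $W_1$, $W_2$, and $\hat{y}$ correspond to w1, w2, and y_pred in the code. The forward pass and loss are

$$
h = x W_1, \qquad h_{\mathrm{relu}} = \max(h, 0), \qquad \hat{y} = h_{\mathrm{relu}} W_2, \qquad L = \sum_{i,j} (\hat{y}_{ij} - y_{ij})^2,
$$

and the backward pass applies the chain rule:

$$
\frac{\partial L}{\partial \hat{y}} = 2(\hat{y} - y), \qquad
\frac{\partial L}{\partial W_2} = h_{\mathrm{relu}}^{\top}\,\frac{\partial L}{\partial \hat{y}}, \qquad
\frac{\partial L}{\partial h} = \left(\frac{\partial L}{\partial \hat{y}}\, W_2^{\top}\right) \odot \mathbf{1}[h > 0], \qquad
\frac{\partial L}{\partial W_1} = x^{\top}\,\frac{\partial L}{\partial h}.
$$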
import numpy as np

# N is batch size; D_in is input dimension;
# H is hidden dimension; D_out is output dimension.
N, D_in, H, D_out = 64, 1000, 100, 10

# Create random input and output data
x = np.random.randn(N, D_in)
y = np.random.randn(N, D_out)

# Randomly initialize weights
w1 = np.random.randn(D_in, H)
w2 = np.random.randn(H, D_out)

learning_rate = 1e-6
for t in range(500):
    # Forward pass: compute predicted y
    h = x.dot(w1)
    h_relu = np.maximum(h, 0)
    y_pred = h_relu.dot(w2)

    # Compute and print loss
    loss = np.square(y_pred - y).sum()
    print(t, loss)
    # Backprop to compute gradients of w1 and w2 with respect to loss
    # loss = (y_pred - y) ** 2, so dL/dy_pred = 2 * (y_pred - y)
    grad_y_pred = 2.0 * (y_pred - y)
    grad_w2 = h_relu.T.dot(grad_y_pred)
    grad_h_relu = grad_y_pred.dot(w2.T)
    grad_h = grad_h_relu.copy()
    grad_h[h < 0] = 0  # ReLU passes gradient only where h > 0
    grad_w1 = x.T.dot(grad_h)

    # Update weights
    w1 -= learning_rate * grad_w1
    w2 -= learning_rate * grad_w2
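As a sanity check, the manual gradients above can be compared against finite differences on a tiny instance of the same network. This is not part of the original tutorial; numerical_grad and loss_fn are names introduced here purely for illustration, and the shapes are shrunk so the element-wise loop stays fast.

import numpy as np

def numerical_grad(f, w, eps=1e-6):
    # Central-difference estimate of the gradient of the scalar function f()
    # with respect to the array w (w is perturbed in place and then restored).
    grad = np.zeros_like(w)
    it = np.nditer(w, flags=['multi_index'])
    while not it.finished:
        idx = it.multi_index
        old = w[idx]
        w[idx] = old + eps
        loss_plus = f()
        w[idx] = old - eps
        loss_minus = f()
        w[idx] = old
        grad[idx] = (loss_plus - loss_minus) / (2 * eps)
        it.iternext()
    return grad

# A tiny two-layer network with the same structure as above
np.random.seed(0)
x = np.random.randn(4, 5)
y = np.random.randn(4, 3)
w1 = np.random.randn(5, 6)
w2 = np.random.randn(6, 3)

def loss_fn():
    h = x.dot(w1)
    h_relu = np.maximum(h, 0)
    y_pred = h_relu.dot(w2)
    return np.square(y_pred - y).sum()

# Analytic gradients, using the same formulas as the training loop above
h = x.dot(w1)
h_relu = np.maximum(h, 0)
y_pred = h_relu.dot(w2)
grad_y_pred = 2.0 * (y_pred - y)
grad_w2 = h_relu.T.dot(grad_y_pred)
grad_h = grad_y_pred.dot(w2.T)
grad_h[h < 0] = 0
grad_w1 = x.T.dot(grad_h)

# The printed differences should be negligibly small
print(np.max(np.abs(grad_w1 - numerical_grad(loss_fn, w1))))
print(np.max(np.abs(grad_w2 - numerical_grad(loss_fn, w2))))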