Deep Learning Specialization, Course 2: Using Dropout in Code

This post walks through a common regularization method, Dropout, with concrete Python implementations of forward and backward propagation that include it. The method reduces overfitting by randomly shutting off a subset of neurons during training.
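Before the full network code, here is a minimal, self-contained sketch (not part of the original code; the toy activation matrix and keep_prob value are made up for illustration) of how an inverted-dropout mask is drawn and applied to one layer's activations:

import numpy as np

np.random.seed(0)
A = np.random.rand(3, 4)     # toy activations: 3 hidden units, 4 examples
keep_prob = 0.5

# 0/1 mask: each unit is kept with probability keep_prob
D = np.random.binomial(n=1, p=keep_prob, size=A.shape)
A_dropped = A * D                    # shut off the dropped units
A_dropped = A_dropped / keep_prob    # inverted dropout: keep the expected activation unchanged

print(D)            # which units survived
print(A_dropped)    # surviving activations scaled up by 1/keep_prob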


In forward propagation, a mask is drawn for each hidden layer and applied to its activations:

import numpy as np
# relu and sigmoid are the helper functions provided with the assignment

def forward_propagation_with_dropout(X, parameters, keep_prob=0.5):
    np.random.seed(1)
    # retrieve parameters
    W1 = parameters["W1"]
    b1 = parameters["b1"]
    W2 = parameters["W2"]
    b2 = parameters["b2"]
    W3 = parameters["W3"]
    b3 = parameters["b3"]
    # LINEAR -> RELU -> LINEAR -> RELU -> LINEAR -> SIGMOID
    Z1 = np.dot(W1, X) + b1
    A1 = relu(Z1)
    # Draw the dropout mask from a binomial distribution:
    # each unit is kept with probability keep_prob
    D1 = np.random.binomial(n=1, p=keep_prob, size=A1.shape)
    A1 = np.multiply(A1, D1)    # shut off the dropped units
    A1 = A1 / keep_prob         # inverted dropout: rescale so the expected value is unchanged

    Z2 = np.dot(W2, A1) + b2
    A2 = relu(Z2)

    D2 = np.random.binomial(n=1, p=keep_prob, size=A2.shape)
    A2 = np.multiply(A2, D2)
    A2 = A2 / keep_prob

    Z3 = np.dot(W3, A2) + b3
    A3 = sigmoid(Z3)
    cache = (Z1, D1, A1, W1, b1, Z2, D2, A2, W2, b2, Z3, A3, W3, b3)
    return A3, cache
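The binomial mask here is statistically equivalent to the more common (np.random.rand(...) < keep_prob) trick. A hypothetical usage sketch follows; the layer shapes, the random parameter initialization, and the stand-in relu/sigmoid definitions are all assumptions for illustration, not the assignment's data:

def relu(Z):                     # stand-in for the assignment's relu helper
    return np.maximum(0, Z)

def sigmoid(Z):                  # stand-in for the assignment's sigmoid helper
    return 1 / (1 + np.exp(-Z))

np.random.seed(2)
X = np.random.randn(2, 5)                           # 2 features, 5 examples
parameters = {
    "W1": np.random.randn(3, 2) * 0.01, "b1": np.zeros((3, 1)),
    "W2": np.random.randn(3, 3) * 0.01, "b2": np.zeros((3, 1)),
    "W3": np.random.randn(1, 3) * 0.01, "b3": np.zeros((1, 1)),
}
A3, cache = forward_propagation_with_dropout(X, parameters, keep_prob=0.7)
print(A3.shape)    # (1, 5): one sigmoid output per example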

In backward propagation, the same masks saved in the cache are applied to the corresponding gradients:

def backward_propagation_with_dropout(X, Y, cache, keep_prob):
    m = X.shape[1]
    (Z1, D1, A1, W1, b1, Z2, D2, A2, W2, b2, Z3, A3, W3, b3) = cache
    dZ3 = A3 - Y
    dW3 = 1./m * np.dot(dZ3, A2.T)
    db3 = 1./m * np.sum(dZ3, axis=1, keepdims=True)
    dA2 = np.dot(W3.T, dZ3)

    dA2 = np.multiply(dA2, D2)    # apply the same mask D2 used in the forward pass
    dA2 = dA2 / keep_prob         # rescale, matching the forward-pass scaling

    dZ2 = np.multiply(dA2, np.int64(A2 > 0))    # derivative of ReLU
    dW2 = 1./m * np.dot(dZ2, A1.T)
    db2 = 1./m * np.sum(dZ2, axis=1, keepdims=True)
    dA1 = np.dot(W2.T, dZ2)

    dA1 = np.multiply(dA1, D1)    # apply the same mask D1 used in the forward pass
    dA1 = dA1 / keep_prob

    dZ1 = np.multiply(dA1, np.int64(A1 > 0))
    dW1 = 1./m * np.dot(dZ1, X.T)
    db1 = 1./m * np.sum(dZ1, axis=1, keepdims=True)
    gradients = {"dZ3": dZ3, "dW3": dW3, "db3": db3, "dA2": dA2,
                 "dZ2": dZ2, "dW2": dW2, "db2": db2, "dA1": dA1,
                 "dZ1": dZ1, "dW1": dW1, "db1": db1}
    return gradients
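To show where these two functions sit in training, here is a hypothetical gradient-descent step; it is not from the original post, update_parameters_sketch and the toy labels Y are illustrative stand-ins, and X and parameters are reused from the sketch above. Note that dropout is applied only during training; at test time the plain forward pass (equivalently keep_prob = 1) is used.

def update_parameters_sketch(parameters, gradients, learning_rate=0.01):
    # plain gradient descent over the three layers
    for l in (1, 2, 3):
        parameters["W" + str(l)] -= learning_rate * gradients["dW" + str(l)]
        parameters["b" + str(l)] -= learning_rate * gradients["db" + str(l)]
    return parameters

Y = np.random.randint(0, 2, size=(1, 5))            # toy binary labels
A3, cache = forward_propagation_with_dropout(X, parameters, keep_prob=0.7)
gradients = backward_propagation_with_dropout(X, Y, cache, keep_prob=0.7)
parameters = update_parameters_sketch(parameters, gradients)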