Forward

Mean:
$$\mu_{ng} = \frac{\sum_{i=1}^{M} X^{i}}{M} \tag{1}$$
Variance:
$$\sigma_{ng}^2 = \frac{\sum_{i=1}^{M} (X^{i} - \mu_{ng})^2}{M} \tag{2}$$
Normalization:

Let
$$rsig = \frac{1}{\sqrt{\sigma_{ng}^2 + \varepsilon}} \tag{3}$$
Then:
$$Y = \gamma * (X - \mu) * rsig + \beta = \gamma * X * rsig + \beta - \gamma * \mu * rsig \tag{4}$$
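The forward pass of Eqs. (1)–(4) can be sketched in a few lines of NumPy. The NCHW layout, group count, and per-channel γ/β are illustrative assumptions (the formulas above write a single γ/β for brevity):

```python
import numpy as np

def group_norm_forward(x, gamma, beta, num_groups, eps=1e-5):
    # x: (N, C, H, W); gamma, beta: per-channel (C,) -- assumed layout
    n, c, h, w = x.shape
    k = c // num_groups                         # K = C / Group, Eq. (7)
    xg = x.reshape(n, num_groups, k * h * w)    # each row holds M = K*H*W values
    mu = xg.mean(axis=2, keepdims=True)         # Eq. (1)
    rsig = 1.0 / np.sqrt(xg.var(axis=2, keepdims=True) + eps)  # Eqs. (2)-(3)
    y = ((xg - mu) * rsig).reshape(n, c, h, w)  # normalize
    return gamma.reshape(1, -1, 1, 1) * y + beta.reshape(1, -1, 1, 1)  # Eq. (4)

np.random.seed(0)
x = np.random.randn(2, 4, 3, 3)
y = group_norm_forward(x, np.ones(4), np.zeros(4), num_groups=2)
```

With γ = 1 and β = 0, each group of the output has mean ≈ 0 and variance ≈ 1 (up to the ε term).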
Backward

Let
$$S = \gamma * rsig \tag{5}$$

$$B = \beta - \gamma * \mu * rsig \tag{6}$$
Then

$$Y = S * X + B$$
Let

$$M = K \times H \times W \quad (K = C / Group) \tag{7}$$
By the chain rule:

$$\frac{dL}{dX} = \frac{dL}{dY} * \frac{dY}{dX} = \frac{dL}{dY} * \left(\frac{d(S * X)}{dX} + \frac{dB}{dX}\right) \tag{8}$$
where:

$$\frac{d(S * X)}{dX} = S + X * \frac{dS}{dX} = S + X * \gamma * \frac{drsig}{dX} \tag{9}$$

$$\frac{dB}{dX} = -\gamma * \mu * \frac{drsig}{dX} - \gamma * rsig * \frac{d\mu}{dX}$$

$$\frac{drsig}{dX} = -rsig^3 * \frac{X - \mu}{M} \tag{10}$$

$$\frac{d\mu}{dX} = \frac{1}{M} \tag{11}$$
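The per-element partials (10)–(11) can be spot-checked numerically with central differences (toy sizes; variable names are illustrative):

```python
import numpy as np

np.random.seed(1)
eps = 1e-5
x = np.random.randn(8)
m = x.size
mu = x.mean()
rsig = 1.0 / np.sqrt(x.var() + eps)

h = 1e-6
i = 3
e = np.zeros(m); e[i] = h                      # perturb only x[i]

# Eq. (10): drsig/dx_i = -rsig^3 * (x_i - mu) / M
rsig_p = 1.0 / np.sqrt((x + e).var() + eps)
rsig_m = 1.0 / np.sqrt((x - e).var() + eps)
drsig_num = (rsig_p - rsig_m) / (2 * h)
drsig_ana = -rsig**3 * (x[i] - mu) / m

# Eq. (11): dmu/dx_i = 1/M
dmu_num = ((x + e).mean() - (x - e).mean()) / (2 * h)
```

Note that the μ-dependence inside σ² cancels when differentiating (since Σ(X − μ) = 0), which is why (10) is this simple even though μ also moves with X.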
Combining (5) and (8)–(11), and noting that μ and rsig depend on every element of the group, so the chain rule sums the contributions of all outputs to each dL/dX_i (the dy factors become group-wide reductions):

$$\frac{dL}{dX_i} = dy_i * S + \gamma * rsig^3 * \frac{\mu - X_i}{M} * \sum_{j} dy_j * (X_j - \mu) - \frac{\gamma * rsig}{M} * \sum_{j} dy_j \tag{12}$$
Let

$$C_1 = S = \gamma * rsig$$

$$C_2 = \frac{\gamma * rsig^3}{M} * \sum_{j} dy_j * (\mu - X_j)$$

$$C_3 = -C_2 * \mu - \frac{\gamma * rsig}{M} * \sum_{j} dy_j \tag{13}$$

Note that C_2 and C_3 are per-group scalars, each obtained from a single reduction over the group.
We obtain:

$$dx_i = C_1 * dy_i + C_2 * X_i + C_3 \tag{14}$$
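A sketch of Eqs. (13)–(14) for a single group with scalar γ (shapes and names are assumptions), validated against a finite-difference gradient of the scalar loss L = Σ dy·Y:

```python
import numpy as np

def gn_forward_1group(x, gamma, beta, eps=1e-5):
    mu = x.mean()
    rsig = 1.0 / np.sqrt(x.var() + eps)        # Eqs. (1)-(3)
    return gamma * (x - mu) * rsig + beta      # Eq. (4)

def gn_backward_1group(x, dy, gamma, eps=1e-5):
    m = x.size                                 # M, Eq. (7)
    mu = x.mean()
    rsig = 1.0 / np.sqrt(x.var() + eps)
    c1 = gamma * rsig                                  # Eq. (13)
    c2 = gamma * rsig**3 / m * np.sum(dy * (mu - x))   # reduction 1
    c3 = -c2 * mu - gamma * rsig / m * np.sum(dy)      # reduction 2
    return c1 * dy + c2 * x + c3               # Eq. (14)

np.random.seed(0)
x = np.random.randn(12)
dy = np.random.randn(12)
dx = gn_backward_1group(x, dy, gamma=1.5)

# finite-difference check of dL/dx_i for L = sum(dy * Y)
h = 1e-6
dx_num = np.zeros_like(x)
for i in range(x.size):
    e = np.zeros_like(x); e[i] = h
    dx_num[i] = (np.sum(dy * gn_forward_1group(x + e, 1.5, 0.3))
                 - np.sum(dy * gn_forward_1group(x - e, 1.5, 0.3))) / (2 * h)
```

This is the form GPU kernels typically use: two reductions over the group produce the scalars C_2 and C_3, and the per-element gradient is then a fused multiply-add.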
groupnorm_backward: derivation of the backward-pass formulas
This article walks through the forward-pass computation of the mean, variance, and normalization used in convolutional neural networks, and how the gradient is propagated backward via the chain rule, covering formulas (1) through (14).