Course 2: Improving Deep Neural Networks, Week 1 Programming Assignment: Gradient Checking

Course 2: Improving Deep Neural Networks: Hyperparameter tuning, Regularization and Optimization

Week 1 Programming Assignment: Gradient Checking

Lecture notes for this week: Week 1: Practical aspects of Deep Learning

Gradient Checking

Welcome to the final assignment for this week! In this assignment you will learn to implement and use gradient checking.

You are part of a team working to make mobile payments available globally, and are asked to build a deep learning model to detect fraud: whenever someone makes a payment, you want to see if the payment might be fraudulent, for example if the user’s account has been taken over by a hacker.

But backpropagation is quite challenging to implement, and sometimes has bugs. Because this is a mission-critical application, your company’s CEO wants to be really certain that your implementation of backpropagation is correct. Your CEO says, “Give me a proof that your backpropagation is actually working!” To give this reassurance, you are going to use “gradient checking”.

Let’s do it!

# Packages
import numpy as np
from testCases import *
from gc_utils import sigmoid, relu, dictionary_to_vector, vector_to_dictionary, gradients_to_vector

1) How does gradient checking work?

Backpropagation computes the gradients $\frac{\partial J}{\partial \theta}$, where $\theta$ denotes the parameters of the model. $J$ is computed using forward propagation and your loss function.

Because forward propagation is relatively easy to implement, you’re confident you got that right, and so you’re almost 100% sure that you’re computing the cost $J$ correctly. Thus, you can use your code for computing $J$ to verify the code for computing $\frac{\partial J}{\partial \theta}$.

Let’s look back at the definition of a derivative (or gradient):
$$\frac{\partial J}{\partial \theta} = \lim_{\varepsilon \to 0} \frac{J(\theta + \varepsilon) - J(\theta - \varepsilon)}{2 \varepsilon} \tag{1}$$

If you’re not familiar with the “$\lim_{\varepsilon \to 0}$” notation, it’s just a way of saying “when $\varepsilon$ is really, really small.”

We know the following:

  • $\frac{\partial J}{\partial \theta}$ is what you want to make sure you’re computing correctly.
  • You can compute $J(\theta + \varepsilon)$ and $J(\theta - \varepsilon)$ (in the case that $\theta$ is a real number), since you’re confident your implementation for $J$ is correct.

Let’s use equation (1) and a small value for $\varepsilon$ to convince your CEO that your code for computing $\frac{\partial J}{\partial \theta}$ is correct!
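As a quick sanity check of equation (1) (not part of the assignment itself), you can compare the two-sided difference against a derivative you already know in closed form. The snippet below uses a made-up cost $J(\theta) = \theta^2$, whose true derivative is $2\theta$:

# Illustration of equation (1) with a made-up cost J(theta) = theta^2 (true derivative: 2*theta)
def J_example(theta):
    return theta ** 2

theta = 3.0
epsilon = 1e-7

# Two-sided (centered) difference from equation (1)
grad_approx = (J_example(theta + epsilon) - J_example(theta - epsilon)) / (2 * epsilon)
grad_exact = 2 * theta

print(grad_approx)                     # approximately 6.0
print(abs(grad_approx - grad_exact))   # tiny, roundoff-level, well below 1e-6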

2) 1-dimensional gradient checking

Consider a 1D linear function $J(\theta) = \theta x$. The model contains only a single real-valued parameter $\theta$, and takes $x$ as input.

You will implement code to compute $J(.)$ and its derivative $\frac{\partial J}{\partial \theta}$. You will then use gradient checking to make sure your derivative computation for $J$ is correct.

Figure 1: 1D linear model

The diagram above shows the key computation steps: first start with $x$, then evaluate the function $J(x)$ (forward propagation), and finally compute the derivative $\frac{\partial J}{\partial \theta}$ (backward propagation).
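The section goes on to implement these steps. Here is a minimal sketch of how that could look for scalar inputs (the function names, the 1e-7 threshold, and the test values are illustrative assumptions, not the graded template): forward propagation computes $J(\theta) = \theta x$, backward propagation returns $\frac{\partial J}{\partial \theta} = x$, and the check compares that gradient with the two-sided approximation from equation (1).

# Sketch of 1D gradient checking for J(theta) = theta * x (illustrative, not the graded template)
import numpy as np

def forward_propagation(x, theta):
    # Cost of the 1D linear model: J(theta) = theta * x
    return theta * x

def backward_propagation(x, theta):
    # Analytic derivative of J(theta) = theta * x with respect to theta
    return x

def gradient_check(x, theta, epsilon=1e-7):
    # Approximate the derivative with the two-sided difference of equation (1)
    J_plus = forward_propagation(x, theta + epsilon)
    J_minus = forward_propagation(x, theta - epsilon)
    gradapprox = (J_plus - J_minus) / (2 * epsilon)

    # Derivative computed by "backpropagation"
    grad = backward_propagation(x, theta)

    # Relative difference between the two estimates
    numerator = np.linalg.norm(grad - gradapprox)
    denominator = np.linalg.norm(grad) + np.linalg.norm(gradapprox)
    difference = numerator / denominator

    if difference < 1e-7:
        print("The gradient is correct! difference = " + str(difference))
    else:
        print("The gradient is wrong! difference = " + str(difference))
    return difference

gradient_check(2, 4)  # prints a difference far below 1e-7

A small relative difference (here well under 1e-7) indicates that the analytic gradient and the numerical approximation agree, which is exactly the reassurance gradient checking is meant to provide.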
