Deep Learning Study Notes - C2W3-12 - TensorFlow

TensorFlow

Hi, and welcome back. There are many deep learning programming frameworks that can help you be much more efficient in how you develop and use deep learning algorithms. One of these frameworks is TensorFlow. What I hope to do in this video is step through with you the basic structure of a TensorFlow program, so that you know how you could use TensorFlow to implement neural networks yourself. Then after this video, I'll leave you to dive into some more of the details and gain practice programming with TensorFlow in this week's programming exercise. This week's programming exercise does require a little extra time, so please plan to budget a bit more time to complete it. As a motivating problem, let's say that you have some cost function J that you want to minimize. For this example, I'm going to use this very simple cost function, J(w) = w² - 10w + 25. That's the cost function. You might notice that this function is actually (w - 5)²; if you expand out that quadratic, you get the expression above. The value of w that minimizes this is w = 5. But let's say we didn't know that, and you just had this function.
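As a quick sanity check of the algebra above, a few lines of plain Python confirm that w² - 10w + 25 is (w - 5)² and is minimized at w = 5:

```python
def J(w):
    """The example cost function J(w) = w^2 - 10w + 25, which equals (w - 5)^2."""
    return w ** 2 - 10 * w + 25

# The minimum sits at w = 5, where the cost is exactly 0
assert J(5) == 0
# Points on either side of 5 give strictly larger costs
assert all(J(w) > J(5) for w in (4, 4.9, 5.1, 6))
```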

Let's see how you can implement something in TensorFlow to minimize this, because a very similar structure of program can be used to train neural networks, where you can have some complicated cost function J(w, b) depending on all the parameters of your neural network. Then, similarly, you can use TensorFlow to automatically try to find values of w and b that minimize the cost function. But let's start with the simpler example on the left. Here I am in Python, in my Jupyter Notebook. To start up TensorFlow, you type import numpy as np and import tensorflow as tf. This is idiomatic; it's what pretty much everyone types to import TensorFlow as tf. The next thing you want to do is define the parameter w. In TensorFlow, you use tf.Variable to signify that this is a variable; you initialize it to zero, and the type of the variable is a floating-point number, dtype=tf.float32, which is a TensorFlow floating-point number.
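A minimal sketch of the setup just described, assuming TensorFlow 2.x:

```python
import numpy as np
import tensorflow as tf

# w is the parameter we want to optimize, initialized to 0
# and stored as a TensorFlow 32-bit floating-point number
w = tf.Variable(0, dtype=tf.float32)
```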

Next, let's define the optimization algorithm you're going to use, in this case the Adam optimization algorithm: optimizer = tf.keras.optimizers.Adam, with the learning rate set to 0.1. Now we can define the cost function. Remember, the cost function was w² - 10w + 25, so let me write that down: the cost is w squared minus 10w plus 25. The great thing about TensorFlow is that you only have to implement forward prop; that is, you only have to write the code to compute the value of the cost function. TensorFlow can figure out how to do the backprop, or the gradient computation, for you. One way to do this is to use gradient tape. Let me show you the syntax, which uses with tf.GradientTape() as tape.
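Putting these pieces together, here is a minimal sketch of the whole minimization, assuming TensorFlow 2.x, where GradientTape records the forward-prop operations so their gradients can be computed automatically:

```python
import tensorflow as tf

w = tf.Variable(0, dtype=tf.float32)
optimizer = tf.keras.optimizers.Adam(0.1)  # learning rate 0.1

def train_step():
    # Record the forward computation of the cost on the tape
    with tf.GradientTape() as tape:
        cost = w ** 2 - 10 * w + 25
    # TensorFlow derives the backward pass from the recorded tape
    grads = tape.gradient(cost, [w])
    optimizer.apply_gradients(zip(grads, [w]))

# Run many steps of gradient descent with Adam
for _ in range(1000):
    train_step()

print(w.numpy())  # should approach 5, the minimizer of the cost
```

After enough iterations, w converges close to 5, matching the minimum we worked out by hand.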
