Machine Learning Notes: Simplified Cost Function and Gradient Descent

This article presents a simplified form of the cost function and its use in the gradient descent algorithm. By compressing the two conditional cases into a single unified formula, the logistic regression cost function is simplified. It also describes how gradient descent is implemented, including its vectorized version.


Simplified Cost Function and Gradient Descent

Note: [6:53 - the gradient descent equation should have a 1/m factor]

We can compress our cost function's two conditional cases into one case:

$\mathrm{Cost}(h_\theta(x), y) = -y \log(h_\theta(x)) - (1 - y)\log(1 - h_\theta(x))$

Notice that when $y$ is equal to 1, the second term $(1 - y)\log(1 - h_\theta(x))$ will be zero and will not affect the result. If $y$ is equal to 0, then the first term $-y \log(h_\theta(x))$ will be zero and will not affect the result.

We can fully write out our entire cost function as follows:

$J(\theta) = -\frac{1}{m}\sum_{i=1}^{m}\left[y^{(i)} \log(h_\theta(x^{(i)})) + (1 - y^{(i)})\log(1 - h_\theta(x^{(i)}))\right]$

A vectorized implementation is:

$h = g(X\theta)$

$J(\theta) = \frac{1}{m}\left(-y^{T}\log(h) - (1 - y)^{T}\log(1 - h)\right)$
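As a sketch of the vectorized cost, here is a minimal NumPy implementation; the toy design matrix `X`, labels `y`, and the helper names `sigmoid`/`cost` are assumptions for illustration, not from the original notes:

```python
import numpy as np

def sigmoid(z):
    # logistic function g(z) = 1 / (1 + e^(-z))
    return 1.0 / (1.0 + np.exp(-z))

def cost(theta, X, y):
    # vectorized J(theta) = (1/m) * (-y^T log(h) - (1-y)^T log(1-h))
    m = len(y)
    h = sigmoid(X @ theta)
    return (-(y @ np.log(h)) - ((1 - y) @ np.log(1 - h))) / m

# hypothetical toy data: intercept column plus one feature
X = np.array([[1.0, 0.0],
              [1.0, 1.0],
              [1.0, 2.0]])
y = np.array([0.0, 0.0, 1.0])
theta = np.zeros(2)

J = cost(theta, X, y)
# with theta = 0, h = 0.5 for every example, so J = log(2) ~ 0.693
```

With all parameters at zero the hypothesis outputs 0.5 everywhere, so every example contributes $-\log(0.5) = \log 2$ to the average, which makes a convenient sanity check.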

Gradient Descent

Remember that the general form of gradient descent is:

$\text{Repeat} \; \{ \; \theta_j := \theta_j - \alpha \frac{\partial}{\partial \theta_j} J(\theta) \; \}$

We can work out the derivative part using calculus to get:

$\text{Repeat} \; \{ \; \theta_j := \theta_j - \frac{\alpha}{m}\sum_{i=1}^{m}\left(h_\theta(x^{(i)}) - y^{(i)}\right)x_j^{(i)} \; \}$

Notice that this algorithm is identical to the one we used in linear regression. We still have to simultaneously update all values in theta.

A vectorized implementation is:

$\theta := \theta - \frac{\alpha}{m} X^{T}\left(g(X\theta) - \vec{y}\right)$
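The vectorized update above can be sketched in NumPy as follows; the function name `gradient_descent`, the learning rate, iteration count, and the toy data are assumptions chosen for illustration:

```python
import numpy as np

def sigmoid(z):
    # logistic function g(z) = 1 / (1 + e^(-z))
    return 1.0 / (1.0 + np.exp(-z))

def gradient_descent(X, y, alpha=0.5, iters=5000):
    # repeatedly apply theta := theta - (alpha/m) * X^T (g(X theta) - y)
    m, n = X.shape
    theta = np.zeros(n)
    for _ in range(iters):
        grad = X.T @ (sigmoid(X @ theta) - y) / m
        theta = theta - alpha * grad  # every theta_j updated simultaneously
    return theta

# hypothetical linearly separable data: intercept column plus one feature
X = np.array([[1.0, -2.0],
              [1.0, -1.0],
              [1.0,  1.0],
              [1.0,  2.0]])
y = np.array([0.0, 0.0, 1.0, 1.0])

theta = gradient_descent(X, y)
predictions = (sigmoid(X @ theta) >= 0.5).astype(float)
```

Because the whole parameter vector is rebuilt from the old `theta` in one matrix expression, the simultaneous-update requirement mentioned above is satisfied automatically.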
