Andrew Ng Machine Learning Notes (7) -- Logistic Regression

This article covers logistic regression from Andrew Ng's machine learning course: it explains why linear regression is not suited to classification problems and introduces the sigmoid function so that predictions fall between 0 and 1 and can be read as probabilities. With logistic regression we can handle binary classification tasks such as deciding whether an e-mail is spam or whether a transaction is fraudulent.

Based on: Andrew Ng's Machine Learning course

1. Classification

  • The classification problem is just like the regression problem, except that the values we now want to predict take on only a small number of discrete values.
  • Linear regression doesn’t work well here because classification is not actually a linear function (see the sketch after the examples below).
Examples of classification problems:
  • E-mail: spam / not spam?
  • Online transactions: fraudulent or not?
  • Tumor: malignant / benign?

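As a minimal sketch of that point, the toy snippet below (with made-up 1-D "tumor size" data and labels, not values from the course) fits ordinary linear regression to 0/1 labels and thresholds the prediction at 0.5; adding one extreme positive example flattens the fitted line and breaks the threshold:

```python
import numpy as np

# Made-up 1-D "tumor size" data with binary labels (0 = benign, 1 = malignant).
x = np.array([1.0, 2.0, 3.0, 4.0, 6.0, 7.0, 8.0])
y = np.array([0, 0, 0, 0, 1, 1, 1])

def fit_line(x, y):
    """Ordinary least-squares fit of y = a*x + b."""
    a, b = np.polyfit(x, y, deg=1)
    return a, b

a, b = fit_line(x, y)
pred = (a * x + b >= 0.5).astype(int)    # threshold the linear prediction at 0.5
print("without outlier:", pred)          # matches the labels exactly

# Add one very large malignant tumor: the fitted line flattens,
# and the same 0.5 threshold now misclassifies some malignant examples.
x2 = np.append(x, 30.0)
y2 = np.append(y, 1)
a2, b2 = fit_line(x2, y2)
pred2 = (a2 * x2 + b2 >= 0.5).astype(int)
print("with outlier:   ", pred2)
```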

2. Hypothesis Representation

We will change the form of our hypothesis $h_\theta(x)$ so that it satisfies $0 \le h_\theta(x) \le 1$. This is done by passing $\theta^T x$ through the Sigmoid Function, also called the Logistic Function.

  • $h_{\theta}(x) = g(\theta^{T}x)$
  • $z = \theta^{T}x$
  • $g(z) = \dfrac{1}{1 + e^{-z}}$

The following image shows us what the sigmoid function looks like:
[Figure: plot of the sigmoid function g(z)]
The function g(z), shown here, maps any real number to the (0, 1) interval, making it useful for transforming an arbitrary-valued function into a function better suited for classification.
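A minimal NumPy sketch of these definitions (the parameter and feature values below are made up purely to show the computation):

```python
import numpy as np

def sigmoid(z):
    """Logistic function g(z) = 1 / (1 + e^(-z)); maps any real z into (0, 1)."""
    return 1.0 / (1.0 + np.exp(-z))

def hypothesis(theta, x):
    """h_theta(x) = g(theta^T x)."""
    return sigmoid(theta @ x)

# Made-up parameters and features; x[0] = 1 is the usual intercept term.
theta = np.array([-1.0, 0.5, 0.25])
x = np.array([1.0, 2.0, 3.0])

print(sigmoid(0.0))          # 0.5 -- g(0) sits in the middle of (0, 1)
print(hypothesis(theta, x))  # ~0.68, strictly between 0 and 1
```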

$h_\theta(x)$ will give us the probability that our output is 1. For example, $h_\theta(x) = 0.7$ gives us a probability of 70% that our output is 1.
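Formally, the hypothesis is read as a conditional probability, which also gives the probability of the other class:

  • $h_{\theta}(x) = P(y = 1 \mid x; \theta)$
  • $P(y = 0 \mid x; \theta) = 1 - P(y = 1 \mid x; \theta)$

So for $h_\theta(x) = 0.7$, the probability that the output is 0 is $0.3$.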
