Caltech machine learning, video 11 notes (overfitting)

These notes dig into the nature and impact of overfitting in machine learning, including the role of noise and the effects of target complexity and data set size. Examples show how regularization and validation prevent overfitting and preserve a model's ability to generalize to unseen data.


11:14 2014-10-07
start Caltech machine learning, video 11


overfitting


11:14 2014-10-07
review:


* Multilayer perceptrons


* Neural networks


* Backpropagation


11:16 2014-10-07
overfitting, regularization, validation


11:20 2014-10-07
outline:


* What is overfitting


* The role of noise


* Deterministic noise


* Dealing with overfitting


11:23 2014-10-07
simple target function


11:28 2014-10-07
we're going to generate 5 points from the target function to learn from.


11:29 2014-10-07
the target disappears, and you have 5 points to fit


11:30 2014-10-07
What is overfitting?


4th-order polynomial fit, Ein = 0, Eout is huge!!!
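
a minimal sketch of this experiment (the particular target, noise level, and seed below are my own assumptions, not the lecture's): a 4th-order polynomial has 5 free coefficients, so it interpolates 5 points exactly, driving Ein to 0 while Eout blows up.

```python
# Sketch only: target, noise, and seed are assumed, not the lecture's exact setup.
import numpy as np

rng = np.random.default_rng(0)

def f(x):
    return np.sin(np.pi * x)                     # assumed simple target

x_train = rng.uniform(-1, 1, 5)                  # 5 points to fit
y_train = f(x_train) + 0.1 * rng.standard_normal(5)

w = np.polyfit(x_train, y_train, deg=4)          # 4th-order polynomial fit

E_in = np.mean((np.polyval(w, x_train) - y_train) ** 2)

x_test = rng.uniform(-1, 1, 10_000)              # fresh points from the target
E_out = np.mean((np.polyval(w, x_test) - f(x_test)) ** 2)

print(f"Ein  = {E_in:.2e}")                      # ~0: exact interpolation
print(f"Eout = {E_out:.2e}")                     # typically far larger
```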


11:33 2014-10-07
you're exploring more & more of the space of weights


11:53 2014-10-07
the total number of free parameters in the model


11:54 2014-10-07
neural network fitting noisy data


11:55 2014-10-07
overfitting occurs when, as you keep fitting Ein down, Eout goes up!


11:55 2014-10-07
generalization error


11:56 2014-10-07
simply stop at that point


12:03 2014-10-07
early stopping
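
a hedged sketch of early stopping, assuming a generic gradient-descent loop (the linear model and hyperparameters are illustrative, not the lecture's): keep the weights from the iteration with the lowest validation error, and simply stop once that error keeps rising.

```python
# Illustrative early stopping; model and hyperparameters are assumptions.
import numpy as np

def train_with_early_stopping(X_tr, y_tr, X_val, y_val,
                              lr=0.01, max_iters=10_000, patience=50):
    w = np.zeros(X_tr.shape[1])
    best_w, best_val, bad_steps = w.copy(), np.inf, 0
    for _ in range(max_iters):
        grad = 2 * X_tr.T @ (X_tr @ w - y_tr) / len(y_tr)  # MSE gradient
        w -= lr * grad                                     # one descent step
        val_err = np.mean((X_val @ w - y_val) ** 2)        # proxy for Eout
        if val_err < best_val:
            best_val, best_w, bad_steps = val_err, w.copy(), 0
        else:
            bad_steps += 1
            if bad_steps >= patience:                      # stop at that point
                break
    return best_w, best_val
```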


12:03 2014-10-07
overfitting happens when you compare 2 things, whether the 2 things are 2 different models or 2 instances within the same model.


12:05 2014-10-07
if there is overfitting, we'd better detect it and stop earlier


12:05 2014-10-07
overfitting: fitting the data more than is warranted.


culprit: fitting the noise


12:14 2014-10-07
fitting the noise is the cost of doing business


12:15 2014-10-07
it's taken you away from the correct solution


12:16 2014-10-07
so let's say I'm going to generate 15 points in this case


12:17 2014-10-07
Ein  // in-sample error


Eout // out-of-sample error


12:26 2014-10-07
now let's apply the 10th-order fit


12:33 2014-10-07
and what is the out-of-sample error (Eout)?


just terrible


12:34 2014-10-07
you're actually fitting the noise
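
a sketch of this experiment under assumed specifics (target, noise level, seed): 15 noisy points, then a 2nd-order vs a 10th-order fit, with Eout estimated on fresh points from the true target.

```python
# Sketch only: target and noise level are my assumptions.
import numpy as np

rng = np.random.default_rng(1)
f = lambda x: np.sin(np.pi * x)                 # assumed target

x = rng.uniform(-1, 1, 15)
y = f(x) + 0.3 * rng.standard_normal(15)        # 15 noisy points

x_test = rng.uniform(-1, 1, 10_000)

for deg in (2, 10):
    w = np.polyfit(x, y, deg)
    E_in = np.mean((np.polyval(w, x) - y) ** 2)
    E_out = np.mean((np.polyval(w, x_test) - f(x_test)) ** 2)
    print(f"degree {deg:2d}: Ein = {E_in:.3f}, Eout = {E_out:.3f}")
    # the 10th-order fit wins on Ein and loses badly on Eout:
    # it is fitting the noise
```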


12:34 2014-10-07
getting that notion down is very important


12:36 2014-10-07
where are you going to get overfitting?

you could be facing a situation that is completely noiseless in the conventional sense, and yet there is overfitting, because you're fitting another type of noise.


12:38 2014-10-07
noisy simple target


noiseless higher-order target


12:38 2014-10-07
if I told you what the target is, this is not machine learning


12:39 2014-10-07
choose your model


12:39 2014-10-07
you match the "data resources" rather than the "target complexity"


12:41 2014-10-07
in this case, you're looking at the generalization issues, and you know the generalization issues depend on the size & quality of the data set.


12:42 2014-10-07
overfitting even without noise


12:58 2014-10-07
impact of "noise level" & "target complexity"


13:02 2014-10-07
y = f(x) + ε(x) // target function + noise


13:02 2014-10-07
factors that affect overfitting:


* noise level


* target complexity


* data set size


13:08 2014-10-07
we're fitting a data set with 2 models


13:10 2014-10-07
so you get a pattern for what is going on


13:14 2014-10-07
as I increase the noise level, overfitting worsens


13:15 2014-10-07
as you increase the number of points, overfitting goes down


13:16 2014-10-07
let's look at the impact of Qf (target complexity)


13:17 2014-10-07
overfitting error = f(noise level, target complexity, number of points)
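
a rough Monte Carlo sketch of how such figures could be produced (the random Legendre-style targets, ranges, and trial counts are all my assumptions): measure overfit = Eout(10th-order fit) − Eout(2nd-order fit), averaged over many random data sets, as a function of (sigma, Qf, N).

```python
# Sketch of the overfit-measure experiment; all specifics are assumed.
import numpy as np
from numpy.polynomial import legendre

rng = np.random.default_rng(2)

def overfit_measure(sigma, Qf, N, trials=200):
    total = 0.0
    for _ in range(trials):
        coef = rng.standard_normal(Qf + 1)       # random order-Qf target
        f = lambda x: legendre.legval(x, coef)
        x = rng.uniform(-1, 1, N)
        y = f(x) + sigma * rng.standard_normal(N)
        x_test = rng.uniform(-1, 1, 2000)
        eout = {}
        for deg in (2, 10):
            w = np.polyfit(x, y, deg)
            eout[deg] = np.mean((np.polyval(w, x_test) - f(x_test)) ** 2)
        total += eout[10] - eout[2]              # > 0 means overfitting
    return total / trials

# e.g. overfitting worsens with sigma and eases as N grows:
print(overfit_measure(sigma=0.5, Qf=10, N=20))
print(overfit_measure(sigma=0.5, Qf=10, N=100))
```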


13:18 2014-10-07
there are 2 things that you can derive from these 2 figures


13:19 2014-10-07
impact of noise:


* stochastic noise    // noise level


* deterministic noise // target complexity


13:21 2014-10-07
I get more overfitting as I get more deterministic noise 


13:22 2014-10-07
Defn of deterministic noise:


The part of f that H cannot capture


// f == target function


// H == hypothesis set


13:23 2014-10-07
it's the part of the target that your hypothesis set cannot capture
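
a tiny illustration of deterministic noise, under my own choice of target: with zero stochastic noise, fit the best 2nd-order hypothesis h* to a complicated f; the residual f(x) − h*(x) is the part of f that H cannot capture.

```python
# Illustration only: the target is assumed.
import numpy as np

f = lambda x: np.sin(8 * x)                      # "out of my league" target

x = np.linspace(-1, 1, 1000)
h_star = np.polyval(np.polyfit(x, f(x), 2), x)   # best fit within H (2nd order)

det_noise = f(x) - h_star                        # part of f that H can't capture
print(f"deterministic-noise energy: {np.mean(det_noise ** 2):.3f}")
```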


13:24 2014-10-07
you're still not going to get f even if you try your best, because your hypothesis set is limited.


13:25 2014-10-07
it cannot be captured because I'm limited in capturing.


13:25 2014-10-07
out of my league


13:26 2014-10-07
why are we calling it noise?


13:26 2014-10-07
because their hypothesis set is so limited.


13:27 2014-10-07
you're better off just killing that part and giving them a simple thing they can learn, because the additional part will mislead them.


13:29 2014-10-07
so that's why it is called noise.


13:30 2014-10-07
noise:


* stochastic noise


* deterministic noise


13:34 2014-10-07
it is out of your ability.


13:42 2014-10-07
that tells me how far the bias takes my hypothesis set from the target


13:43 2014-10-07
noise & Bias-Variance decomposition


y = f(x) + ε(x) // what if we add noise?


13:44 2014-10-07
Actually, there are two noise terms:


var + bias (deterministic noise) + energy of the noise (stochastic noise)


13:49 2014-10-07
your hypothesis => centroid (average hypothesis) => target proper => actual output
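
in symbols, the decomposition as I read it from the lecture, with sigma^2 the variance of the stochastic noise epsilon and g-bar the centroid (average hypothesis):

```latex
\mathbb{E}_{\mathcal{D},\varepsilon}\!\left[\big(g^{(\mathcal{D})}(x) - y\big)^2\right]
  = \underbrace{\sigma^2}_{\text{stochastic noise}}
  + \underbrace{\big(\bar{g}(x) - f(x)\big)^2}_{\text{bias (deterministic noise)}}
  + \underbrace{\mathbb{E}_{\mathcal{D}}\!\left[\big(g^{(\mathcal{D})}(x) - \bar{g}(x)\big)^2\right]}_{\text{var}}
```

(the bias and var terms are further averaged over x)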


13:50 2014-10-07
look at this decomposition.


13:54 2014-10-07
How do we deal with overfitting? // 2 cures


* regularization // putting the brakes


* validation     // checking the bottom line


13:56 2014-10-07
putting the brakes: the regularization part


13:56 2014-10-07
the amount of brake I'm going to put here is so minimal,


13:58 2014-10-07
that little bit of brake will result in this...


totally dramatic, fantastic fit


13:58 2014-10-07
free fit => restrained fit  // regularization (some brake)
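
a hedged sketch of that restrained fit (weight decay is my stand-in for the regularizer, which the lecture only previews here): the same 10th-order fit, but with a tiny penalty lambda * |w|^2 added; even a very small lambda tames the free fit.

```python
# Sketch: weight decay (ridge) as the "brake"; target and lambda are assumed.
import numpy as np

rng = np.random.default_rng(4)
f = lambda x: np.sin(np.pi * x)

x = rng.uniform(-1, 1, 15)
y = f(x) + 0.3 * rng.standard_normal(15)
X = np.vander(x, 11)                          # 10th-order polynomial features

def ridge_fit(X, y, lam):
    # w = (X^T X + lam I)^{-1} X^T y  -- regularized least squares
    # (for simplicity this also penalizes the constant term)
    return np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ y)

x_test = rng.uniform(-1, 1, 10_000)
X_test = np.vander(x_test, 11)
for lam in (0.0, 1e-3):                       # free fit vs a little brake
    w = ridge_fit(X, y, lam)
    E_out = np.mean((X_test @ w - f(x_test)) ** 2)
    print(f"lambda = {lam}: Eout = {E_out:.3f}")
```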


13:59 2014-10-07
we don't have to do much to prevent the overfitting


13:59 2014-10-07
but we need to understand what regularization is & how to choose it, etc.


14:00 2014-10-07
validation is the other prescription
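
and a minimal sketch of that prescription (split sizes and candidate models are my assumptions): hold out part of the data, treat the holdout error as an estimate of Eout, and pick the model it favors.

```python
# Sketch of holdout validation for model selection; specifics are assumed.
import numpy as np

rng = np.random.default_rng(5)
f = lambda x: np.sin(np.pi * x)

x = rng.uniform(-1, 1, 30)
y = f(x) + 0.3 * rng.standard_normal(30)
x_tr, y_tr = x[:20], y[:20]                   # training set
x_val, y_val = x[20:], y[20:]                 # validation set

best = min(
    range(1, 11),                             # candidate polynomial orders
    key=lambda d: np.mean(
        (np.polyval(np.polyfit(x_tr, y_tr, d), x_val) - y_val) ** 2
    ),
)
print(f"validation picks degree {best}")      # typically a low order
```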