A Quick Review of Machine Learning: 1. Linear Regression

This post reviews maximum likelihood estimation (MLE) and its role in machine learning, in particular how a Bayesian approach avoids overfitting. It walks through the mathematics of MLE and touches on regularization as a way to control model complexity.


It has been a while since I studied machine learning, and since I now work on deep learning, this series revisits the key points.

Fitting the model parameters by minimizing the squared error is a special case of maximum likelihood estimation (MLE), as derived below.
Overfitting is a general property of MLE. By using a Bayesian method, we can avoid it (indeed, the effective number of parameters then adapts as the size of the dataset changes).
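As a quick sketch of the point above, ridge regression (the MAP counterpart of MLE under a Gaussian prior on the weights) can be compared against plain least squares on a small noisy dataset. The data, polynomial degree, and penalty strength here are all made up for illustration:

```python
import numpy as np
from sklearn.linear_model import LinearRegression, Ridge
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures

rng = np.random.default_rng(0)
x = np.sort(rng.uniform(0, 1, 15))[:, None]
y = np.sin(2 * np.pi * x).ravel() + rng.normal(0, 0.2, 15)

# Degree-9 polynomial on 15 points: plain MLE (least squares) tends to
# overfit, while the L2 penalty (Gaussian prior on the weights) shrinks
# the effective number of parameters.
mle = make_pipeline(PolynomialFeatures(9), LinearRegression()).fit(x, y)
map_ = make_pipeline(PolynomialFeatures(9), Ridge(alpha=1e-3)).fit(x, y)

x_test = np.linspace(0, 1, 100)[:, None]
y_true = np.sin(2 * np.pi * x_test).ravel()
print("MLE test MSE:", np.mean((mle.predict(x_test) - y_true) ** 2))
print("MAP test MSE:", np.mean((map_.predict(x_test) - y_true) ** 2))
```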


MLE:
$$y^{(i)} = \theta^T x^{(i)} + \epsilon^{(i)}$$

Assume that $\epsilon^{(i)}$ is i.i.d. Gaussian noise with mean $0$ and variance $\sigma^2$. The likelihood of a single example is then

$$p(y^{(i)} \mid x^{(i)}; \theta) = \frac{1}{\sqrt{2\pi}\,\sigma} \exp\left(-\frac{(y^{(i)} - \theta^T x^{(i)})^2}{2\sigma^2}\right)$$
Maximize the likelihood:

$$\begin{aligned} \operatorname*{arg\,max}_\theta L(\theta) &= \operatorname*{arg\,max}_\theta\; p(\vec y \mid X; \theta) \\ &= \operatorname*{arg\,max}_\theta\; \prod_{i=1}^n \frac{1}{\sqrt{2\pi}\,\sigma} \exp\left(-\frac{(y^{(i)} - \theta^T x^{(i)})^2}{2\sigma^2}\right) \end{aligned}$$
In practice the log-likelihood $\log L(\theta)$ is easier to work with:

$$\begin{aligned} \ell(\theta) = \log L(\theta) &= \log \prod_{i=1}^n \frac{1}{\sqrt{2\pi}\,\sigma} \exp\left(-\frac{(y^{(i)} - \theta^T x^{(i)})^2}{2\sigma^2}\right) \\ &= \sum_{i=1}^n \log \frac{1}{\sqrt{2\pi}\,\sigma} \exp\left(-\frac{(y^{(i)} - \theta^T x^{(i)})^2}{2\sigma^2}\right) \\ &= n \log \frac{1}{\sqrt{2\pi}\,\sigma} - \frac{1}{\sigma^2} \cdot \frac{1}{2} \sum_{i=1}^n \left(y^{(i)} - \theta^T x^{(i)}\right)^2 \end{aligned}$$

Maximizing $\ell(\theta)$ is therefore equivalent to minimizing the least-squares cost $J(\theta) = \frac{1}{2}\sum_{i=1}^n (y^{(i)} - \theta^T x^{(i)})^2$, which can be solved in closed form via the normal equations or plugged into an iterative solver such as gradient descent or Newton's method.
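Since $J(\theta)$ is quadratic, one Newton step from any starting point lands exactly on the normal-equation solution $\theta = (X^T X)^{-1} X^T y$. A minimal NumPy sketch on made-up data:

```python
import numpy as np

rng = np.random.default_rng(42)
n = 100
X = np.c_[np.ones(n), rng.normal(size=(n, 2))]   # design matrix with bias column
theta_true = np.array([1.0, 2.0, -3.0])
y = X @ theta_true + rng.normal(0, 0.1, n)       # y = theta^T x + eps

# Closed form: the normal equations minimize J(theta) = 1/2 ||y - X theta||^2.
theta_hat = np.linalg.solve(X.T @ X, X.T @ y)

# One Newton step from theta = 0 gives the same answer, since the cost is
# quadratic: grad = -X^T (y - X theta), Hessian = X^T X.
theta0 = np.zeros(3)
grad = -X.T @ (y - X @ theta0)
hess = X.T @ X
theta_newton = theta0 - np.linalg.solve(hess, grad)

print(np.allclose(theta_hat, theta_newton))  # → True
```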

sklearn linear_model.LinearRegression

Parameters

fit_intercept : bool, optional, default True
Whether to calculate the intercept for this model. If set to False, no intercept will be used in calculations (i.e. data is expected to be centered).

normalize : bool, optional, default False
This parameter is ignored when fit_intercept is set to False. If True, the regressors X will be normalized before regression by subtracting the mean and dividing by the l2-norm. If you wish to standardize, please use sklearn.preprocessing.StandardScaler before calling fit on an estimator with normalize=False.

copy_X : bool, optional, default True
If True, X will be copied; else, it may be overwritten.

n_jobs : int or None, optional, default None
The number of jobs to use for the computation. This will only provide speedup for n_targets > 1 and sufficiently large problems. None means 1 unless in a joblib.parallel_backend context. -1 means using all processors. See Glossary for more details.

Attributes

coef_ : array of shape (n_features,) or (n_targets, n_features)
Estimated coefficients for the linear regression problem. If multiple targets are passed during the fit (y 2D), this is a 2D array of shape (n_targets, n_features), while if only one target is passed, this is a 1D array of length n_features.

rank_ : int
Rank of matrix X. Only available when X is dense.

singular_ : array of shape (min(X, y),)
Singular values of X. Only available when X is dense.

intercept_ : float or array of shape (n_targets,)
Independent term in the linear model. Set to 0.0 if fit_intercept = False.
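Putting the parameters and attributes above together, a minimal usage sketch on exactly linear toy data (the numbers are made up):

```python
import numpy as np
from sklearn.linear_model import LinearRegression

X = np.array([[1.0], [2.0], [3.0], [4.0]])
y = 0.5 * X.ravel() + 1.0                 # exactly linear: y = 0.5 x + 1

reg = LinearRegression(fit_intercept=True).fit(X, y)
print(reg.coef_)              # → [0.5]   (shape (n_features,))
print(reg.intercept_)         # → 1.0     (independent term)
print(reg.predict([[5.0]]))   # → [3.5]
```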
