2016.03.30 Supervised learning

This post reviews the advantages of maximum a posteriori (MAP) estimation in Bayesian inference and its variance-reducing property, and introduces the power of the kernel trick for learning nonlinear models along with its efficient implementation. It also analyzes the main drawbacks of kernel machines, including the high cost of evaluating the decision function and the large computational cost of training, and contrasts this with the high capacity of the nearest neighbor algorithm.

1. As with full Bayesian inference, MAP Bayesian inference has the advantage of leveraging information that is brought by the prior and cannot be found in the training data. This additional information helps to reduce the variance in the MAP point estimate (in comparison to the ML estimate). However, it does so at the price of increased bias.


Compared with the ML estimator, the MAP estimator has lower variance, so the distribution of the estimate is more concentrated; but because MAP estimation injects a preference through the prior, the bias of the estimate increases.
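A small simulation makes this trade-off concrete. The setup below is my own sketch, not from the book: we estimate the mean of a Gaussian with known variance, where the ML estimate is the sample mean and the MAP estimate under an assumed N(0, τ²) prior shrinks the sample mean toward zero.

```python
import numpy as np

rng = np.random.default_rng(0)
true_mean, sigma, tau, n = 2.0, 1.0, 1.0, 5
# MAP shrinkage factor toward the prior mean 0, for a N(0, tau^2) prior
shrink = n / (n + sigma**2 / tau**2)

ml_estimates, map_estimates = [], []
for _ in range(10000):
    x = rng.normal(true_mean, sigma, size=n)
    ml = x.mean()                      # maximum likelihood estimate
    ml_estimates.append(ml)
    map_estimates.append(shrink * ml)  # MAP estimate: shrunken sample mean

ml_estimates = np.array(ml_estimates)
map_estimates = np.array(map_estimates)
print("ML  var:", ml_estimates.var(), " bias:", ml_estimates.mean() - true_mean)
print("MAP var:", map_estimates.var(), " bias:", map_estimates.mean() - true_mean)
```

Across repeated datasets, the MAP estimates cluster more tightly (lower variance) but are systematically pulled away from the true mean (higher bias), exactly the trade-off described above.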



2. The power of the kernel trick


The kernel trick is powerful for two reasons. First, it allows us to learn models that are nonlinear as a function of x using convex optimization techniques that are guaranteed to converge efficiently. This is possible because we consider φ fixed and optimize only α, i.e., the optimization algorithm can view the decision function as being linear in a different space. Second, the kernel function k often admits an implementation that is significantly more computationally efficient than naively constructing two φ(x) vectors and explicitly taking their dot product.


The SVM is not the only algorithm that uses the kernel trick; many algorithms can be generalized from linear to nonlinear via the kernel trick. All algorithms that employ the kernel trick are collectively called kernel methods.
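As a small illustration of the second point above (the degree-2 polynomial kernel and the toy vectors are my own choices), the kernel k(x, y) = (x·y)² gives the same value as explicitly building the feature vectors φ(x), φ(y) and taking their dot product, while only ever computing one dot product in the original space:

```python
import numpy as np

def phi(x):
    # Explicit degree-2 feature map for 2-D input: [x1^2, x2^2, sqrt(2) x1 x2]
    return np.array([x[0]**2, x[1]**2, np.sqrt(2) * x[0] * x[1]])

def k(x, y):
    # Homogeneous polynomial kernel of degree 2: (x . y)^2
    return np.dot(x, y) ** 2

x = np.array([1.0, 2.0])
y = np.array([3.0, 0.5])
print(np.dot(phi(x), phi(y)))  # dot product after explicit feature construction
print(k(x, y))                 # same value, without ever forming phi(x) or phi(y)
```

For higher input dimensions and higher polynomial degrees, φ(x) grows combinatorially large while k(x, y) stays a single dot product plus a power, which is where the computational saving comes from.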

3. The main drawbacks of kernel methods

A major drawback to kernel machines is that the cost of evaluating the decision function is linear in the number of training examples, because the i-th example contributes a term α_i k(x, x^(i)) to the decision function. Support vector machines are able to mitigate this by learning an α vector that contains mostly zeros. Classifying a new example then requires evaluating the kernel function only for the training examples that have non-zero α_i. These training examples are known as support vectors.
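A minimal numpy sketch of this idea (the RBF kernel, the toy data, and the α values are hypothetical, standing in for the result of SVM training) shows how a sparse α makes evaluating the decision function cheap:

```python
import numpy as np

def rbf(x, z, gamma=1.0):
    # Gaussian (RBF) kernel
    return np.exp(-gamma * np.sum((x - z) ** 2))

# Toy training set; alpha is sparse, as it would be after SVM training
X_train = np.array([[0.0], [1.0], [2.0], [3.0], [4.0]])
alpha   = np.array([0.0, 0.7, 0.0, -0.7, 0.0])  # mostly zeros
b = 0.1

support = np.flatnonzero(alpha)  # indices of the support vectors

def decision(x):
    # Only support vectors contribute: cost scales with their count,
    # not with the full training set size len(X_train)
    return sum(alpha[i] * rbf(x, X_train[i]) for i in support) + b

print(decision(np.array([1.5])), "computed from", len(support),
      "of", len(X_train), "training examples")
```

Here only 2 of the 5 training examples enter the sum, which is exactly the mitigation the text describes.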


Kernel machines also suffer from a high computational cost of training when the dataset is large. We will revisit this idea in Sec. 5.9. Kernel machines with generic kernels struggle to generalize well. We will explain why in Sec. 5.11. The modern incarnation of deep learning was designed to overcome these limitations of kernel machines. The current deep learning renaissance began when Hinton et al. (2006) demonstrated that a neural network could outperform the RBF kernel SVM on the MNIST benchmark.


4. On the k-nearest neighbor algorithm


As a non-parametric learning algorithm, k-nearest neighbor can achieve very high capacity. For example, suppose we have a multiclass classification task and measure performance with 0-1 loss. In this setting, 1-nearest neighbor converges to double the Bayes error as the number of training examples approaches infinity. The error in excess of the Bayes error results from choosing a single neighbor by breaking ties between equally distant neighbors randomly. When there is infinite training data, all test points x will have infinitely many training set neighbors at distance zero. If we allow the algorithm to use all of these neighbors to vote, rather than randomly choosing one of them, the procedure converges to the Bayes error rate. The high capacity of k-nearest neighbors allows it to obtain high accuracy given a large training set.
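A minimal k-NN sketch (toy data of my own; note that ties are broken deterministically by index order here rather than randomly) shows the memorization behind this high capacity: 1-NN attains zero training error because every training point is its own nearest neighbor.

```python
import numpy as np

def knn_predict(X_train, y_train, x, k=1):
    # Distances from x to every stored training point:
    # the "model" is simply the whole training set
    d = np.linalg.norm(X_train - x, axis=1)
    nearest = np.argsort(d)[:k]        # indices of the k closest points
    votes = y_train[nearest]
    return int(np.bincount(votes).argmax())  # majority vote among neighbors

X = np.array([[0.0, 0.0], [0.1, 0.0], [1.0, 1.0], [1.1, 1.0]])
y = np.array([0, 0, 1, 1])

# 1-NN reproduces the training labels exactly (zero training error)
train_preds = [knn_predict(X, y, x, k=1) for x in X]
print(train_preds)
```

This unbounded ability to fit the training set is what "very high capacity" means here; generalization then hinges entirely on how much training data is available.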




### Supervised learning: concept and recommended video tutorials

Supervised learning is a machine learning approach in which a model is trained on known input data paired with the corresponding output labels. The goal is for the model to learn the mapping between inputs and outputs from the training data, and then predict outputs accurately on new, unseen data.

In supervised learning, the dataset typically consists of a set of feature vectors and their corresponding labels. In a classification task the labels are usually discrete values, while in a regression task they are continuous. The model adjusts its parameters by minimizing a loss function so as to improve prediction accuracy.

A few recommended video tutorial resources on supervised learning:

1. **Andrew Ng's Machine Learning course.** Andrew Ng's Machine Learning course on Coursera covers the fundamentals of supervised learning, including linear regression, logistic regression, and neural networks.
   Link: [Coursera - Machine Learning by Andrew Ng](https://www.coursera.org/learn/machine-learning)

2. **Deep learning tutorials from the LISA Lab.** The tutorials from the LISA Lab at the University of Montreal provide a practical handbook and software guide for deep learning, including material related to supervised learning. Although aimed mainly at deep learning, the introductory parts are well suited to beginners trying to grasp the core ideas of supervised learning.
   Link: [LISA Lab Tutorials](http://deeplearning.net/tutorial/)

3. **StatQuest's supervised learning videos.** StatQuest is a YouTube channel focused on statistics and machine learning, offering a series of accessible videos that explain the basic principles of supervised learning and their applications.
   Link: [StatQuest - Supervised Learning Playlist](https://www.youtube.com/results?search_query=statquest+supervised+learning)

```python
# Example: a simple supervised learning model (linear regression)
import numpy as np
from sklearn.linear_model import LinearRegression

# Build the training data
X = np.array([[1], [2], [3], [4], [5]])
y = np.array([2, 4, 6, 8, 10])

# Train the model
model = LinearRegression()
model.fit(X, y)

# Predict on new data
new_data = np.array([[6]])
prediction = model.predict(new_data)
print(f"Prediction for input {new_data.flatten()}: {prediction[0]}")
```