Annotated Reading of the Original SMO Paper (ongoing updates)

This post walks through SMO (Sequential Minimal Optimization), an algorithm that speeds up the training of SVMs (Support Vector Machines). Compared with traditional training algorithms, SMO is conceptually simple, easy to implement, generally faster, and scales better on difficult SVM problems. By replacing numerical quadratic programming with an analytic QP step, SMO effectively addresses the slow training of SVMs.


1. Introduction

1. INTRODUCTION
In the last few years, there has been a surge of interest in Support Vector Machines (SVMs) [19]
[20] [4]. SVMs have empirically been shown to give good generalization performance on a wide
variety of problems such as handwritten character recognition [12], face detection [15], pedestrian
detection [14], and text categorization [9].
However, the use of SVMs is still limited to a small group of researchers. One possible reason is
that training algorithms for SVMs are slow, especially for large problems. Another explanation is
that SVM training algorithms are complex, subtle, and difficult for an average engineer to
implement.
Commentary: SVMs work well, but training is too slow and the training algorithms too complex, which makes them impractical for many users.
This paper describes a new SVM learning algorithm that is conceptually simple, easy to
implement, is generally faster, and has better scaling properties for difficult SVM problems than
the standard SVM training algorithm. The new SVM learning algorithm is called Sequential
Minimal Optimization (or SMO). Instead of previous SVM learning algorithms that use
numerical quadratic programming (QP) as an inner loop, SMO uses an analytic QP step.
Commentary: the SMO algorithm proposed in this paper addresses this problem.
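To make "analytic QP step" concrete, here is a minimal sketch of the closed-form update SMO applies to a pair of Lagrange multipliers, following the formulas in Platt's paper (bounds L/H, η, clipping). The function and variable names are illustrative, not taken from any reference implementation:

```python
def smo_pair_update(a1, a2, y1, y2, E1, E2, k11, k12, k22, C):
    """One analytic SMO step for two Lagrange multipliers (a1, a2).

    Solves the two-variable QP subproblem in closed form instead of
    calling a numerical QP solver, as described in Platt's paper.
    E1, E2 are prediction errors; k11, k12, k22 are kernel values.
    """
    # Bounds L, H keep the pair on the constraint line y1*a1 + y2*a2 = const
    # while each multiplier stays inside the box [0, C].
    if y1 != y2:
        L, H = max(0.0, a2 - a1), min(C, C + a2 - a1)
    else:
        L, H = max(0.0, a1 + a2 - C), min(C, a1 + a2)
    if L >= H:
        return a1, a2  # no feasible progress for this pair

    eta = k11 + k22 - 2.0 * k12  # second derivative of the objective
    if eta <= 0:
        return a1, a2  # degenerate case; the paper treats it separately

    a2_new = a2 + y2 * (E1 - E2) / eta      # unconstrained optimum
    a2_new = min(H, max(L, a2_new))         # clip to the box
    a1_new = a1 + y1 * y2 * (a2 - a2_new)   # preserve the linear constraint
    return a1_new, a2_new
```

A full SMO implementation wraps this step in heuristics for choosing which pair to optimize next, plus an update of the threshold b; this sketch only shows why each inner step is cheap: it is pure arithmetic, with no numerical optimizer in the loop.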

2. Derivation
