A few papers to read

This post collects several important papers on online learning and optimization algorithms, covering adaptive bound optimization, adaptive subgradient methods, and related work from conferences such as COLT, NIPS, and ICML. These papers give practical online algorithms with regret guarantees that hold over a family of metrics, not just the Euclidean one.


COLT 2010: Adaptive Bound Optimization for Online Convex Optimization: http://www.colt2010.org/papers/104mcmahan.pdf

COLT 2010: Adaptive Subgradient Methods for Online Learning and Stochastic Optimization:

http://www.colt2010.org/papers/023Duchi.pdf  

NIPS 2009: Adaptive Regularization of Weight Vectors

http://books.nips.cc/papers/files/nips22/NIPS2009_0611.pdf  IR/paper/2009/nips

 

I remembered where I came across these papers: someone mentioned them on the machine learning blog I subscribe to, and it happened that I had already read Duchi's paper before. The blogger's verdict on these papers: "These papers provide tractable online algorithms with regret guarantees over a family of metrics rather than just euclidean metrics. They look pretty useful in practice."
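To make the "family of metrics" point concrete, here is a minimal sketch (my own illustration, not code from any of these papers) of the diagonal variant of the adaptive subgradient idea in the Duchi et al. paper: each coordinate accumulates its own squared gradients and receives its own step size, so the update effectively runs in an adapted diagonal metric rather than using a single Euclidean step size. The hinge-loss setup and the constants eta and eps below are assumptions made just for this sketch.

```python
import numpy as np

def adagrad_hinge(stream, dim, eta=0.1, eps=1e-8):
    """Sketch of a diagonal AdaGrad-style online learner (linear classifier, hinge loss).

    Each coordinate i keeps its own sum of squared (sub)gradients g_sq[i] and is
    updated with step size eta / (sqrt(g_sq[i]) + eps), i.e. the metric adapts
    per coordinate instead of being a fixed Euclidean one.
    """
    w = np.zeros(dim)        # weight vector
    g_sq = np.zeros(dim)     # per-coordinate sum of squared (sub)gradients
    for x, y in stream:      # x: feature vector, y: label in {-1, +1}
        if y * np.dot(w, x) < 1.0:          # hinge loss active -> subgradient is -y * x
            g = -y * x
            g_sq += g * g
            w -= eta * g / (np.sqrt(g_sq) + eps)   # per-coordinate adaptive step
    return w

# Tiny usage example on synthetic data (purely illustrative).
if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = rng.normal(size=(1000, 5))
    true_w = np.array([1.0, -2.0, 0.5, 0.0, 3.0])
    y = np.where(X @ true_w >= 0, 1.0, -1.0)
    w = adagrad_hinge(zip(X, y), dim=5)
    print("learned weights:", np.round(w, 2))
```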

 

approx: a general approximation framework...

ICML series:

A Dual Coordinate Descent Method for Large-scale Linear SVM: need to read this one, finish it tonight.

 

A Quasi-Newton Approach to Nonsmooth Convex Optimization (2008)

http://www.conflate.net/icml/paper/2008/461

 

Learning Diverse Rankings with Multi-Armed Bandits (2008)

http://www.conflate.net/icml/paper/2008/264

 

ManifoldBoost: Stagewise Function Approximation for Fully-, Semi- and Un-supervised Learning (2008)

http://www.conflate.net/icml/paper/2008/676

 

Multiple Instance Ranking (2008)

http://www.conflate.net/icml/paper/2008/552

 

Optimized Cutting Plane Algorithm for Support Vector Machines (2008)

http://www.conflate.net/icml/paper/2008/411

Stochastic Methods for L1 Regularized Loss Minimization (2009)

http://www.conflate.net/icml/paper/2009/262

Boosting with Structural Sparsity (2009)

http://www.conflate.net/icml/paper/2009/146

 

BoltzRank: Learning to Maximize Expected Ranking Gain (2009)

http://www.conflate.net/icml/paper/2009/498

 

Interactively Optimizing Information Retrieval Systems as a Dueling Bandits Problem (2009)

http://www.conflate.net/icml/paper/2009/346

 

Learning Structural SVMs with Latent Variables (2009)

http://www.conflate.net/icml/paper/2009/420

Reminder: papers I have read must be written up, and after reading, organized by category on my science blog.
