[NOTE in progress] Distributed Optimization and Statistical Learning via ADMM - Boyd

Reading notes on the paper "Distributed Optimization and Statistical Learning via the Alternating Direction Method of Multipliers" by Boyd, Parikh, Chu, Peleato, and Eckstein.

Introduction

  • ADMM: developed in the 1970s, with roots in the 1950s. Proved to be closely related to other methods such as Douglas-Rachford splitting, Spingarn's method of partial inverses, proximal methods, etc.
  • Why ADMM today: with the arrival of the big data era and the demand for large-scale ML algorithms, ADMM has proved well suited to solving large-scale optimization problems in a distributed fashion.
  • What big data brings us: with big data, simple methods can turn out to be very effective at solving complex problems.
  • ADMM can be seen as a blend of dual decomposition and the augmented Lagrangian method. The latter is more robust and converges under weaker assumptions, but does not decompose directly the way dual decomposition does.
  • ADMM can decompose by examples or by features. [To be explored in later chapters]
  • Note that even when used in serial mode, ADMM is still comparable to other methods and often converges to modest accuracy within tens of iterations; see the sketch below.
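
To make the iteration pattern concrete, below is a minimal numpy sketch of ADMM in scaled form applied to the lasso (one of the examples treated later in the paper): minimize (1/2)||Ax - b||^2 + λ||z||_1 subject to x - z = 0. The problem data, ρ = 1, and the fixed iteration count are illustrative choices of mine, not the paper's.

```python
import numpy as np

def admm_lasso(A, b, lam, rho=1.0, n_iter=50):
    """Minimize 0.5*||A x - b||^2 + lam*||z||_1  s.t.  x - z = 0,
    using ADMM in scaled form (u is the scaled dual variable)."""
    n = A.shape[1]
    x, z, u = np.zeros(n), np.zeros(n), np.zeros(n)
    # Formed once; a real implementation would also cache its factorization.
    AtA_rhoI = A.T @ A + rho * np.eye(n)
    Atb = A.T @ b
    for _ in range(n_iter):
        x = np.linalg.solve(AtA_rhoI, Atb + rho * (z - u))       # x-update
        v = x + u
        z = np.sign(v) * np.maximum(np.abs(v) - lam / rho, 0.0)  # z-update (soft threshold)
        u = u + x - z                                            # scaled dual update
    return z

# Tiny usage example on synthetic data
rng = np.random.default_rng(0)
A = rng.standard_normal((30, 10))
x_true = np.zeros(10)
x_true[:3] = [1.0, -2.0, 0.5]
b = A @ x_true + 0.01 * rng.standard_normal(30)
print(np.round(admm_lasso(A, b, lam=0.1), 3))
```

The three-step pattern (x-update, z-update, dual update) is the whole algorithm; the x-update reuses the same matrix every iteration, which is part of why even serial ADMM iterations are cheap.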

Precursors

  • What is the conjugate function exactly? (Definition recalled in the first block after this list.)
  • Dual ascent and dual subgradient methods: they converge if the step size is chosen appropriately and some other assumptions hold. (A dual-decomposition sketch follows this list.)
  • Why the augmented Lagrangian (see the method-of-multipliers block after this list):
    • More robust, fewer assumptions needed (no strict convexity or finiteness of f required): in practice some convergence assumptions of dual ascent are not met; e.g. if the objective is affine (min x s.t. x > 10), the Lagrangian is unbounded below in x for almost every dual value, so the x-minimization step fails.
    • For equality constraints, the augmented version converges faster. This can be viewed from the penalty method's point of view.
  • Dual decomposition: relax the coupling constraints so that the problem can be decomposed. This naturally invites parallel computation; see the sketch after this list.
  • The ρ in the augmented Lagrangian is actually the step size of the dual update; using ρ as the step size guarantees that each iterate (x^{k+1}, y^{k+1}) is dual feasible.
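
On the first bullet, the standard definitions in the paper's notation, recalled here for self-containedness: the conjugate of f, and the identity showing how it appears in the dual of an equality-constrained problem.

```latex
\begin{align*}
  f^*(y) &= \sup_x \left( y^\top x - f(x) \right)
    && \text{conjugate of } f \\
  g(y) &= \inf_x \left( f(x) + y^\top (Ax - b) \right)
        = -b^\top y - f^*(-A^\top y)
    && \text{dual function of } \min f(x) \ \text{s.t. } Ax = b
\end{align*}
```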
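
On dual ascent and dual decomposition: a minimal sketch assuming the objective separates as a sum of block terms f_i(x_i) = 0.5*||x_i - c_i||^2, a quadratic chosen only so each x_i-update has a closed form; the step size alpha and the iteration count are likewise illustrative.

```python
import numpy as np

def dual_decomposition(A_blocks, c_blocks, b, alpha=0.02, n_iter=500):
    """Solve  min sum_i 0.5*||x_i - c_i||^2  s.t.  sum_i A_i x_i = b
    by dual ascent; every x_i-update is independent, hence parallelizable."""
    y = np.zeros(b.shape[0])
    xs = [c.copy() for c in c_blocks]
    for _ in range(n_iter):
        # Broadcast y; each block solves argmin 0.5*||x_i - c_i||^2 + y^T A_i x_i.
        xs = [c - A.T @ y for A, c in zip(A_blocks, c_blocks)]
        # Gather the constraint residual, then take a gradient step on the dual.
        residual = sum(A @ x for A, x in zip(A_blocks, xs)) - b
        y = y + alpha * residual
    return xs, y

# Tiny usage example: 4 blocks of size 3, 2 coupling constraints
rng = np.random.default_rng(1)
A_blocks = [rng.standard_normal((2, 3)) for _ in range(4)]
c_blocks = [rng.standard_normal(3) for _ in range(4)]
xs, y = dual_decomposition(A_blocks, c_blocks, b=np.ones(2))
```

Each element of the list comprehension in the x-update could run on a separate machine; only the residual gather and the broadcast of y require coordination.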
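
On the augmented-Lagrangian bullets and the step-size remark: the method of multipliers as given in the paper, with the one-line stationarity computation that shows why ρ is the natural dual step size.

```latex
\begin{align*}
  L_\rho(x, y) &= f(x) + y^\top (Ax - b) + \tfrac{\rho}{2}\,\|Ax - b\|_2^2
    && \text{augmented Lagrangian} \\
  x^{k+1} &= \operatorname*{argmin}_x \; L_\rho(x, y^k)
    && \text{primal update} \\
  y^{k+1} &= y^k + \rho\,(A x^{k+1} - b)
    && \text{dual update with step size } \rho \\
  0 &= \nabla_x L_\rho(x^{k+1}, y^k) = \nabla f(x^{k+1}) + A^\top y^{k+1}
    && \text{so } (x^{k+1}, y^{k+1}) \text{ is dual feasible}
\end{align*}
```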