
Convex optimization
艳艳儿
[First order method] Gradient descent tools
Contents: 1. Lipschitz gradient; 2. strong convexity; co-coercivity of the gradient. Excerpt: "If \nabla f(x) is L-Lipschitz continuous, then we have \frac{1}{2L}\|\nabla f(x)\|_2^2 \le f(x)…" (truncated). Originally posted 2016-01-20.
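The inequality in the excerpt is cut off mid-statement; what follows is a hedged reconstruction of the standard 1/(2L) bound that L-smoothness implies (standard notation, not the post's exact wording — f^\star denotes the minimum value):

```latex
% Descent lemma: if \nabla f is L-Lipschitz, then for all x, y
%   f(y) \le f(x) + \nabla f(x)^\top (y - x) + \frac{L}{2}\|y - x\|_2^2 .
% Taking y = x - \frac{1}{L}\nabla f(x), the minimizer of the right-hand side:
\[
  f^\star \;\le\; f\!\Bigl(x - \tfrac{1}{L}\nabla f(x)\Bigr)
          \;\le\; f(x) - \frac{1}{2L}\,\|\nabla f(x)\|_2^2 ,
\]
\[
  \text{and hence}\qquad
  \frac{1}{2L}\,\|\nabla f(x)\|_2^2 \;\le\; f(x) - f^\star .
\]
```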
[Help] Proximal mapping
Contents: properties of the proximal mapping — 1. 1-Lipschitz (nonexpansive) and monotone; 2. projection property; 3. scaling and translation. Excerpt: "For a convex function h(x), its proximal mapping…" (truncated). Originally posted 2016-02-18.
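The post's code is in R; as an illustrative sketch in Python (my own example, not taken from the post), the best-known proximal mapping — soft-thresholding, i.e. the prox of h(x) = \lambda\|x\|_1 — takes a few lines, and the 1-Lipschitz property from the contents can be spot-checked numerically. The name soft_threshold and the test vectors are assumptions of mine:

```python
import numpy as np

def soft_threshold(x, t):
    """prox_{t*||.||_1}(x): shrink each coordinate toward zero by t,
    zeroing every coordinate with |x_i| <= t."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

# Property 1 from the post (1-Lipschitz / nonexpansive), checked on random points:
rng = np.random.default_rng(0)
u, v = rng.normal(size=5), rng.normal(size=5)
assert (np.linalg.norm(soft_threshold(u, 0.7) - soft_threshold(v, 0.7))
        <= np.linalg.norm(u - v))
```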
[First order method] Gradient descent
Contents: 1. model to consider; 2. interpretation (2.1 via Newton's method; 2.2 via quadratic approximation of the original function); 3. how to choose the step size t_k; 4. convergence. Originally posted 2016-01-19.
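As a minimal sketch of the step-size point in the contents (my own example, not the post's code): with the constant step t = 1/L, where L is the Lipschitz constant of the gradient, gradient descent on a positive-definite quadratic converges to the exact minimizer. The matrix A and vector b below are illustrative:

```python
import numpy as np

# f(x) = 1/2 x^T A x - b^T x, so grad f(x) = A x - b is L-Lipschitz
# with L = lambda_max(A).
A = np.array([[3.0, 1.0], [1.0, 2.0]])   # symmetric positive definite
b = np.array([1.0, -1.0])
L = np.linalg.eigvalsh(A).max()          # Lipschitz constant of the gradient

x = np.zeros(2)
for _ in range(200):
    x = x - (1.0 / L) * (A @ x - b)      # x_{k+1} = x_k - t * grad f(x_k)

x_star = np.linalg.solve(A, b)           # exact minimizer of the quadratic
assert np.allclose(x, x_star, atol=1e-8)
```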
[First order method] Proximal Gradient Descent
Contents: proximal gradient descent; example; properties of the proximal mapping. Excerpt: "Unconstrained problem with the cost function split into two components: \min f(x) = g(x) + h(x), whe…" (truncated). Originally posted 2016-02-07.
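For reference, the standard proximal gradient iteration for such a split objective, written in standard notation (a reconstruction of the usual statement, not a quote from the truncated post):

```latex
% For  min_x f(x) = g(x) + h(x),  g smooth and h with a cheap prox:
\[
  x^{(k+1)} \;=\; \operatorname{prox}_{t_k h}\!\bigl(x^{(k)} - t_k \nabla g(x^{(k)})\bigr),
  \qquad
  \operatorname{prox}_{t h}(y) \;=\; \arg\min_u \Bigl\{\, h(u) + \tfrac{1}{2t}\|u - y\|_2^2 \,\Bigr\}.
\]
```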
[R] Proximal Gradient Descent for Lasso
Excerpt: "This is a short code for studying the proximal gradient descent algorithm." The script defines the lasso objective f = g + h, beginning with f <- function(x, A, b, lambda)… (code truncated). Originally posted 2016-03-21.
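The R script above is truncated; a hypothetical Python re-sketch of the same idea — proximal gradient descent (ISTA) for the lasso objective \frac{1}{2}\|Ax-b\|_2^2 + \lambda\|x\|_1 with the fixed step 1/L — might look like this. All names and the synthetic data are illustrative, not the post's:

```python
import numpy as np

def soft_threshold(x, t):
    """prox_{t*||.||_1}: elementwise shrinkage toward zero by t."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def lasso_ista(A, b, lam, n_iter=500):
    """Proximal gradient (ISTA) for 1/2*||Ax-b||^2 + lam*||x||_1."""
    L = np.linalg.norm(A.T @ A, 2)        # Lipschitz constant of the smooth part
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        grad = A.T @ (A @ x - b)          # gradient of the smooth term g
        x = soft_threshold(x - grad / L, lam / L)  # prox step on h
    return x

# Noiseless synthetic check: recover a sparse vector.
rng = np.random.default_rng(0)
A = rng.normal(size=(50, 10))
x_true = np.zeros(10)
x_true[:3] = [2.0, -1.5, 1.0]
b = A @ x_true
x_hat = lasso_ista(A, b, lam=0.1, n_iter=2000)
```

With a small \lambda and noiseless data, x_hat lands close to the sparse x_true.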
[R] ADMM for lasso
Excerpt: "This is a short code for studying ADMM for lasso." The script defines the objective f = g + h via f <- function(x, A, b, lambda){ 1/2*norm(… (code truncated). Originally posted 2016-03-22.
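Likewise, a hedged Python sketch of scaled-form ADMM for the lasso (x-update a ridge-type solve, z-update soft-thresholding, u the scaled dual variable). The penalty rho = 1 and the synthetic data are my own choices, not the post's:

```python
import numpy as np

def soft_threshold(x, t):
    """prox_{t*||.||_1}: elementwise shrinkage toward zero by t."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def lasso_admm(A, b, lam, rho=1.0, n_iter=500):
    """ADMM for 1/2*||Ax-b||^2 + lam*||z||_1 subject to x = z."""
    n = A.shape[1]
    x, z, u = np.zeros(n), np.zeros(n), np.zeros(n)
    M = A.T @ A + rho * np.eye(n)     # system matrix shared by every x-update
    Atb = A.T @ b
    for _ in range(n_iter):
        x = np.linalg.solve(M, Atb + rho * (z - u))   # x-update (ridge solve)
        z = soft_threshold(x + u, lam / rho)          # z-update (prox of l1)
        u = u + x - z                                 # scaled dual update
    return z

# Same noiseless sparse-recovery check as for ISTA.
rng = np.random.default_rng(1)
A = rng.normal(size=(50, 10))
x_true = np.zeros(10)
x_true[:3] = [2.0, -1.5, 1.0]
b = A @ x_true
z_hat = lasso_admm(A, b, lam=0.1)
```

The z iterate is returned because soft-thresholding makes it exactly sparse, while x only approaches sparsity in the limit.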