1. Introduction:
- Linear conjugate gradient (CG) method: an iterative method for solving large linear systems of equations
- Nonlinear conjugate gradient method: an adaptation of linear CG to general nonlinear optimization problems
- Key features: requires no matrix storage and converges faster than steepest descent
- Assume \(A\) is an \(n\times n\) symmetric positive definite matrix and define \(\phi(x)=\frac{1}{2}x^TAx-b^Tx\); then \(\nabla\phi(x)=Ax-b=r(x)\), the residual, so minimizing \(\phi\) is equivalent to solving \(Ax=b\)
- Idea: comparing methods on this quadratic (in exact arithmetic), steepest descent may take anywhere from 1 to infinitely many iterations, coordinate descent takes \(n\) or infinitely many, Newton's method takes 1, and CG takes at most \(n\) (see the sketch after this list)
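
As a concrete illustration of the linear CG iteration on the quadratic \(\phi\) above, here is a minimal sketch in Python/NumPy. The function name `conjugate_gradient` and the random test problem are illustrative assumptions, not from the source; this is the standard textbook recursion, not a production solver.

```python
import numpy as np

def conjugate_gradient(A, b, x0=None, tol=1e-10, max_iter=None):
    """Linear CG for Ax = b, A symmetric positive definite.

    In exact arithmetic, terminates in at most n iterations.
    """
    n = b.shape[0]
    x = np.zeros(n) if x0 is None else x0.astype(float)
    max_iter = n if max_iter is None else max_iter
    r = A @ x - b          # residual r(x) = Ax - b = grad phi(x)
    p = -r                 # first direction: steepest descent
    for _ in range(max_iter):
        if np.linalg.norm(r) < tol:
            break
        Ap = A @ p
        alpha = (r @ r) / (p @ Ap)        # exact line search along p
        x = x + alpha * p
        r_new = r + alpha * Ap            # updated residual
        beta = (r_new @ r_new) / (r @ r)  # makes p_new A-conjugate to p
        p = -r_new + beta * p
        r = r_new
    return x

# Hypothetical usage: build a random SPD system and solve it
rng = np.random.default_rng(0)
M = rng.standard_normal((50, 50))
A = M.T @ M + 50 * np.eye(50)    # symmetric positive definite
b = rng.standard_normal(50)
x = conjugate_gradient(A, b)
print(np.linalg.norm(A @ x - b))  # residual norm near machine precision
```

Note that the only access to \(A\) is through the matrix-vector product `A @ p`, which is why CG needs no matrix storage beyond that product.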
Reposted from: https://www.cnblogs.com/cihui/p/6860582.html