Optimization Algorithms: Principles, Implementation, and Applications
Optimization problems are everywhere in scientific research and engineering practice, from finding the maximum or minimum of a function to solving complex constrained problems. This article introduces several common optimization algorithms, including the Shubert–Piyavskii algorithm, steepest descent, Newton's method, the conjugate gradient method, and quasi-Newton methods, and gives a MATLAB implementation and example for each.
1. The Shubert–Piyavskii Algorithm
The Shubert–Piyavskii algorithm is a global optimization method for finding the maximum or minimum of a function. The following concrete example shows how to use it.
Problem: use the Shubert–Piyavskii method to find the maximum of $f(x) = -\sin(1.2x) - \sin(3.5x)$ on the interval $[-3, 8]$. Take the Lipschitz constant $C = 8$, a stopping criterion of $1 \times 10^{-6}$, and a maximum of 2000 function evaluations.
MATLAB implementation:
C = 8;
a = -3;
b = 8;
crit = 1e-6;
nfmax = 2000;
fun = @(x) -sin(1.2*x)-sin(3.5*x);
[xopt,fopt,nf] = shubertopt(fun, a, b, C, crit, nfmax)
Output:
xopt =
3.2162
fopt =
1.6239
nf =
2000
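The shubertopt routine called above is not listed in the source. For reference, here is a minimal sketch of the Shubert–Piyavskii sawtooth method matching that call signature; it is an illustrative assumption, not the original implementation:
function [xopt, fopt, nf] = shubertopt(fun, a, b, C, crit, nfmax)
% Sketch of the Shubert-Piyavskii method for maximizing fun on [a,b] with
% Lipschitz constant C. The sample points X stay sorted; on each subinterval
% the piecewise-linear upper envelope min(F(i)+C*(x-X(i)), F(i+1)+C*(X(i+1)-x))
% peaks at z(i) with value u(i), and we always sample at the highest peak.
X = [a, b]; F = [fun(a), fun(b)]; nf = 2;
while nf < nfmax
    z = (F(2:end) - F(1:end-1))/(2*C) + (X(1:end-1) + X(2:end))/2;
    u = (F(1:end-1) + F(2:end))/2 + C*(X(2:end) - X(1:end-1))/2;
    [umax, i] = max(u);                   % highest peak of the upper envelope
    if umax - max(F) <= crit, break; end  % bound close enough to best sample
    xnew = z(i); fnew = fun(xnew); nf = nf + 1;
    X = [X(1:i), xnew, X(i+1:end)];       % insert new sample, keep X sorted
    F = [F(1:i), fnew, F(i+1:end)];
end
[fopt, imax] = max(F); xopt = X(imax);
end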
2. Steepest Descent
Steepest descent is an iterative optimization algorithm for finding a minimum of a function. Its basic idea is to search, at each step, along the direction in which the function decreases fastest.
Algorithm steps:
1. Choose a starting point $x_0$ and a stopping criterion $\epsilon$; set $k = 0$.
2. Compute $\nabla f(x_k)$. If $\|\nabla f(x_k)\| \leq \epsilon$, stop; otherwise compute the normalized direction vector $d_k = -\nabla f(x_k) / \|\nabla f(x_k)\|$.
3. Determine the step size $\alpha_k = \alpha^*$ by an approximate line search, and update the design variables: $x_{k+1} = x_k + \alpha_k d_k$.
4. Evaluate the new function value $f(x_{k+1})$. If $|f(x_{k+1}) - f(x_k)| \leq \epsilon$, stop; otherwise set $k = k + 1$, $x_k = x_{k+1}$, and return to step 2.
MATLAB implementation:
function [xopt,fopt, iter] = sdopt(fun, delfun, x0, alpha0, crit, kmax)
nfmax = 10; % maximum function calls during a line search
ni = 2; % poor interval reductions allowed before switching to sectioning
h = alpha0; beta = 0.9; xz = x0; f = fun(x0); x1 = 0; f1 = f; fs = f; k = 0; nc = 0; fold = f;
while (1)
k = k + 1;
if k > kmax, disp('Maximum number of iterations exceeded.'); break; end
x = xz; delf = delfun(x);
if norm(delf) <= crit, break; end
d = -delf'; % steepest descent direction vector
d = d / norm(d); % row vector
[xs, fs] = quadappx(fun, xz, x, d, x1, f1, h, nfmax, ni, crit);
x1 = 0; f1 = fs; xz = xz + xs*d; h = beta * xs; fpr = fs;
if abs(fpr - fold) < crit
nc = nc + 1;
if nc == ni, break; end
else, nc = 0;
end
fold = fpr;
end
iter = k; xopt = x; fopt = fs;
end
function [x2, f2] = quadappx(fun, xz, x, d, x1, f1, h, nfmax, ni, crit)
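% Line search helper: starting with step h along direction d from the point xz,
% bracket a three-point pattern x1 < x2 < x3 with f2 below f1 and f3, then
% refine the bracket by quadratic interpolation with golden-section safeguards.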
tau = (sqrt(5)-1)/2; x2 = x1 + h; f2 = fun(xz + x2*d);
if f2 < f1
while (1)
h = h / tau; x3 = x2 + h; f3 = fun(xz + x3*d);
if f3 > f2, break;
else, f1 = f2; x1 = x2; f2 = f3; x2 = x3;
end
end
else
x3 = x2; f3 = f2;
while (1)
x2 = (1 - tau) * x1 + tau * x3; f2 = fun(xz + x2*d);
if f2 <= f1, break;
else, x3 = x2; f3 = f2;
end
end
end
sf = 0.05; % 0 < sf < 0.5
if (x1 >= x2 || x2 >= x3), disp('Incorrect interval.'); return;
elseif (f1 <= f2 || f2 >= f3), disp('Not 3-point pattern.'); return;
end
vs = 0; vc = 0; wc = 0; j = 1;
while j <= nfmax
sold = abs(x3-x1); fmold = (f1+f2+f3)/3.;
if vs == 0
A = (x1-x2)*(x1-x3); B=(x2-x1)*(x2-x3); C=(x3-x1)*(x3-x2);
x4 = (f1*(x2+x3)/A + f2*(x1+x3)/B + f3*(x1+x2)/C)/(f1/A+f2/B+f3/C)/2;
else
if x2 <= (x1+x3)/2, x4 = x2 + (1-tau)*(x3-x2);
else, x4 = x3 - (1-tau)*(x2-x1);
end
vs = 0;
end
dxs = sf*min(abs(x2-x1), abs(x3-x2));
if abs(x4-x1) < dxs, x4 = x1+dxs;
elseif abs(x4-x3) < dxs, x4 = x3-dxs;
elseif abs(x4-x2) < dxs
if x2 > (x1+x3)/2, x4 = x2-dxs;
else, x4 = x2+dxs;
end
end
f4 = fun(xz + x4*d);
if (x4 > x2)
if (f4 >= f2), x3 = x4; f3 = f4;
else, x1 = x2; f1 = f2; x2 = x4; f2 = f4;
end
else
if (f4 >= f2), x1 = x4; f1 = f4;
else, x3 = x2; f3 = f2; x2 = x4; f2 = f4;
end
end
snew = abs(x3-x1); fmnew = (f1+f2+f3)/3.;
if abs(x3-x1) <= crit, break; end
if abs(fmnew-fmold) <= crit
wc = wc + 1;
if wc == 2, break; end
else, wc = 0;
end
if snew/sold > tau
vc = vc + 1;
if vc == ni, vc = 0; vs = 1; end
else, vc = 0; vs = 0;
end
j = j + 1; % advance the counter so that nfmax actually bounds the loop
end
end
Example: use steepest descent to find the minimum of the Rosenbrock function $f(x) = 100(x_2 - x_1^2)^2 + (1 - x_1)^2$. The starting point is $[-1.2, 1]$, the initial step size $\alpha_0 = 5$, the stopping criterion $1 \times 10^{-6}$, and at most 10000 iterations are allowed.
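For reference, the anonymous function delfun below encodes the gradient obtained by differentiating $f$:
$$\nabla f(x) = \begin{bmatrix} 400\,x_1(x_1^2 - x_2) + 2(x_1 - 1) \\ 200\,(x_2 - x_1^2) \end{bmatrix}.$$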
fun = @(x) 100*(x(2)-x(1)^2)^2 + (1-x(1))^2;
delfun = @(x) [400*x(1)*(x(1)^2-x(2))+2*(x(1)-1); 200*(x(2)-x(1)^2)];
x0 = [-1.2 1];
crit = 1e-6;
alpha0 = 5;
kmax = 1e4;
[xopt,fopt, iter] = sdopt(fun, delfun, x0, alpha0, crit, kmax)
Steepest descent flowchart:
graph TD;
A[Choose starting point x0 and stopping criterion ε, k=0] --> B[Compute ∇f(xk)];
B -- ∥∇f(xk)∥ ≤ ε --> C[Stop];
B -- ∥∇f(xk)∥ > ε --> D[Compute dk = -∇f(xk) / ∥∇f(xk)∥];
D --> E[Determine step size αk];
E --> F[Update xk+1 = xk + αk dk];
F --> G[Evaluate f(xk+1)];
G -- |f(xk+1) - f(xk)| ≤ ε --> C;
G -- |f(xk+1) - f(xk)| > ε --> H[k = k + 1, xk = xk+1];
H --> B;
3. Newton's Method
Newton's method is an optimization algorithm that uses second derivatives (the Hessian matrix); it locates a minimum by constructing a quadratic approximation of the objective function.
Algorithm steps:
1. Choose a starting point $x_0$; set the stopping criterion $\epsilon$ and the iteration index $k = 0$.
2. Compute $\nabla f(x_k)$ and $\nabla^2 f(x_k)$. If $\|\nabla f(x_k)\| \leq \epsilon$, stop; otherwise find the direction vector $d_k$ by solving $\nabla^2 f(x_k)\, d_k = -\nabla f(x_k)$.
3. Update the current point: $x_{k+1} = x_k + d_k$.
4. Evaluate $f(x_{k+1})$. If $|f(x_{k+1}) - f(x_k)| \leq \epsilon$ holds on two consecutive iterations, stop; otherwise set $k = k + 1$, $x_k = x_{k+1}$, and return to step 2.
MATLAB implementation (note that this listing works in root-finding form: it drives $f(x)$ itself toward zero through a finite-difference Jacobian rather than forming the Hessian, which reaches the minimizer here because the Rosenbrock minimum value is exactly 0):
function [xopt,fopt, iter] = newtonopt(fun, x0, crit, kmax)
h = 1e-4; fx = fun(x0); nf = length(fx); nx = length(x0);
xs(1,:) = x0(:)'; % initial row solution vector
for k = 1:kmax
dx = -jacob(fun, xs(k,:), h)\fx(:); % -[df]^(-1)*fx: minimum-norm Newton step toward f(x)=0
xs(k+1,:) = xs(k,:) + dx'; fx = fun(xs(k+1,:));
if norm(fx) < crit || norm(dx) < crit, break; end
end
x = xs(k+1,:); xopt = x; fopt = fx; iter = k;
end
function Hs = jacob(fun, x, h)
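% Central-difference Jacobian; for a scalar fun this returns the 1-by-n gradient row.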
hd = 2*h; n = length(x); x = x(:)'; M = eye(n);
for k = 1:n
Hs(:,k) = (fun(x + M(k,:)*h) - fun(x-M(k,:)*h))'/hd;
end
end
Example: use Newton's method to find the minimum of the Rosenbrock function. The starting point is $[-1.2, 1]$, the stopping criterion $1 \times 10^{-6}$, and at most 1000 iterations are allowed.
fun = @(x) 100*(x(2)-x(1)^2)^2 + (1-x(1))^2;
x0 = [-1.2 1];
crit = 1e-6;
kmax = 1e3;
[xopt,fopt, iter] = newtonopt(fun, x0, crit, kmax)
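The listing above follows the root-finding form. For comparison with the algorithm steps, here is a minimal sketch of a classical Newton minimizer for the Rosenbrock function using its analytic gradient and Hessian; this script is an illustrative assumption, not part of the original code:
% Sketch: classical (undamped) Newton iteration with analytic derivatives.
fun   = @(x) 100*(x(2)-x(1)^2)^2 + (1-x(1))^2;
gradf = @(x) [-400*x(1)*(x(2)-x(1)^2) + 2*(x(1)-1); 200*(x(2)-x(1)^2)];
hessf = @(x) [1200*x(1)^2 - 400*x(2) + 2, -400*x(1); -400*x(1), 200];
x = [-1.2; 1]; crit = 1e-6; kmax = 100;
for k = 1:kmax
    g = gradf(x);
    if norm(g) <= crit, break; end
    d = -hessf(x)\g;   % Newton direction: solve H*d = -g
    x = x + d;         % full Newton step (no damping or line search)
end
xopt = x'; fopt = fun(x);
In practice a line search or trust region is added so that each step actually decreases $f$; the pure step above can overshoot on strongly non-quadratic functions.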
4. The Conjugate Gradient Method
The conjugate gradient method is an optimization algorithm for minimizing quadratic functions that also applies to general non-quadratic functions. It combines the current gradient vector with gradient information from the previous iteration to determine each new search direction.
Algorithm steps (the choice of $\beta_k$ used in the code is given after this list):
1. Choose an initial point $x_0$, set $k = 0$, and compute $d_0 = -\nabla f(x_0)$.
2. Determine $\alpha_k$ by a line search and compute $x_{k+1} = x_k + \alpha_k d_k$.
3. Compute $\beta_k$ and the next direction $d_{k+1}$.
4. Set $k = k + 1$ and return to step 2.
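The implementation below makes the Fletcher–Reeves choice of $\beta_k$, which is exactly what the line beta = (normd/normd0)^2 computes:
$$\beta_k = \frac{\|\nabla f(x_{k+1})\|^2}{\|\nabla f(x_k)\|^2}, \qquad d_{k+1} = -\nabla f(x_{k+1}) + \beta_k d_k.$$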
MATLAB implementation:
function [xopt,fopt, iter] = cgopt(fun, delfun, x0, alpha0, crit, kmax)
n = length(x0); % n: number of variables
nfmax = 10; % maximum function calls during a line search
ni = 2; % poor interval reductions allowed before switching to sectioning
h = alpha0; beta = 0.9; xz = x0; f = fun(x0); x1 = 0; f1 = f; fs = f; k = 0;
nc = 0; nd = 0; fold = f;
while (1)
k = k + 1;
if k > kmax, disp('Maximum number of iterations exceeded.'); break; end
nd = nd + 1; x = xz; delf = delfun(x); normd = norm(delf);
if normd <= crit, break; end
if nd == 1, d = -delf'; % row vector
else, beta = (normd/normd0)^2; d = -delf' + beta*d0;
end
d0 = d; d = d/norm(d);
[xs, fs] = quadappx(fun, xz, x, d, x1, f1, h, nfmax, ni, crit);
x1 = 0; f1 = fs; xz = xz + xs*d; h = beta * xs; fpr = fs;
if abs(fpr - fold) < crit
nc = nc + 1;
if nc == ni, break; end
else, nc = 0;
end
fold = fpr; normd0 = normd;
if nd == n+1, nd = 0; end
end
iter = k; xopt = x; fopt = fs;
end
% quadappx is the same line-search helper listed with the steepest descent code in Section 2.
Example: use the conjugate gradient method to find the minimum of the Rosenbrock function. The starting point is $[-1.2, 1]$, the initial step size $\alpha_0 = 1$, the stopping criterion $1 \times 10^{-6}$, and at most 1000 iterations are allowed (matching kmax = 1e3 in the call below).
fun = @(x) 100*(x(2)-x(1)^2)^2 + (1-x(1))^2;
delfun = @(x) [-400*x(1)*(x(2)-x(1)^2)+2*(x(1)-1); 200*(x(2)-x(1)^2)];
x0 = [-1.2 1];
crit = 1e-6;
alpha0 = 1;
kmax = 1e3;
[xopt,fopt, iter] = cgopt(fun, delfun, x0, alpha0, crit, kmax)
5. Quasi-Newton Methods
Quasi-Newton methods sit between steepest descent and Newton's method: they build up curvature information by updating a symmetric positive-definite matrix that plays the role of the (inverse) Hessian.
Principle:
The steepest descent search direction comes from a linear approximation of the objective at the current point (it is normal to the local contour), while Newton's method uses a quadratic approximation to determine the search direction. The basic idea of quasi-Newton methods is to start from a symmetric positive-definite matrix $H$ and update it so that it accumulates information about the Hessian.
Update formulas:
The matrix $H_{k+1}$ must satisfy the quasi-Newton (secant) condition $H_{k+1} \gamma_k = \delta_k$, where $\gamma_k = \nabla f(x_{k+1}) - \nabla f(x_k)$ and $\delta_k = x_{k+1} - x_k$. Common update formulas include the Davidon–Fletcher–Powell (DFP) and Broyden–Fletcher–Goldfarb–Shanno (BFGS) methods.
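For reference, the two standard rank-two updates of the inverse-Hessian approximation $H$ are (the DFP form is exactly what the dfpopt code below applies term by term):
$$H_{k+1}^{\mathrm{DFP}} = H_k + \frac{\delta_k \delta_k^T}{\delta_k^T \gamma_k} - \frac{H_k \gamma_k \gamma_k^T H_k}{\gamma_k^T H_k \gamma_k},$$
$$H_{k+1}^{\mathrm{BFGS}} = \left(I - \frac{\delta_k \gamma_k^T}{\gamma_k^T \delta_k}\right) H_k \left(I - \frac{\gamma_k \delta_k^T}{\gamma_k^T \delta_k}\right) + \frac{\delta_k \delta_k^T}{\gamma_k^T \delta_k}.$$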
MATLAB implementation (DFP method):
function [xopt,fopt, iter] = dfpopt(fun, delfun, x0, alpha0, crit, kmax)
n = length(x0); % n: number of variables
nfmax = 10; % maximum function calls during a line search
ni = 2; % poor interval reductions allowed before switching to sectioning
h = alpha0; beta = 0.9; x = x0'; xz = x; f = fun(x); x1 = 0; f1 = f; fs = f;
k = 0; nc = 0; nd = 0; fold = f; H = eye(n);
while (1)
k = k + 1;
if k > kmax, disp('Maximum number of iterations exceeded.'); break; end
nd = nd + 1; x = xz; delf = delfun(x); % nd counts steps since the last H reset
if norm(delf) <= crit, break; end
d = -H*delf; d = d/norm(d);
[xs, fs] = quadappx(fun, xz, x, d, x1, f1, h, nfmax, ni, crit);
x1 = 0; f1 = fs; xz = xz + xs*d; delf0 = delfun(xz); delf0 = delf0 - delf;
d0 = H*delf0; Qh = delf0'*d0;
if abs(Qh) > 1e-15, H = H - d0*d0'/Qh; end
d0 = xs*d; Pq = d0'*delf0;
if abs(Pq) > 1e-15, H = H + d0*d0'/Pq; end
h = beta * xs; fpr = fs;
if abs(fpr - fold) < crit
nc = nc + 1;
if nc == ni, break; end
else, nc = 0;
end
fold = fpr;
if nd == n+1, nd = 0; H = eye(n); end
end
iter = k; xopt = x; fopt = fs;
end
% quadappx is the same line-search helper listed with the steepest descent code in Section 2.
Example: use the quasi-Newton (DFP) method to find the minimum of the Rosenbrock function. The starting point is $[-1.2, 1]$, the initial step size $\alpha_0 = 1$, the stopping criterion $1 \times 10^{-6}$, and at most 1000 iterations are allowed.
fun = @(x) 100*(x(2)-x(1)^2)^2 + (1-x(1))^2;
delfun = @(x) [-400*x(1)*(x(2)-x(1)^2)+2*(x(1)-1); 200*(x(2)-x(1)^2)];
x0 = [-1.2 1];
crit = 1e-6;
alpha0 = 1;
kmax = 1e3;
[xopt,fopt, iter] = dfpopt(fun, delfun, x0, alpha0, crit, kmax)
Summary
This article has introduced several common optimization algorithms, including the Shubert–Piyavskii algorithm, steepest descent, Newton's method, the conjugate gradient method, and quasi-Newton methods, with MATLAB implementations and examples for each. The algorithms have different strengths and weaknesses in different settings, and choosing the right one improves both the efficiency and the accuracy of the optimization.
Table: algorithm comparison
| Algorithm | Advantages | Disadvantages | Typical use |
|---|---|---|---|
| Shubert–Piyavskii | Suitable for global optimization | Relatively high computational cost | Finding a global maximum or minimum |
| Steepest descent | Simple to implement | Slow convergence | Initial exploration of a minimum |
| Newton's method | Fast convergence | Requires the Hessian matrix; high per-step cost | Objectives with well-behaved second-order properties |
| Conjugate gradient | Fast convergence without the Hessian | Sensitive to the starting point | Minimizing quadratic or general non-quadratic functions |
| Quasi-Newton | Between steepest descent and Newton; avoids the Hessian | May converge more slowly than Newton | Most optimization problems |
6. Performance Comparison
To get a more direct feel for the algorithms' behavior, we compare their performance on the Rosenbrock function. The table below lists the iteration counts and final results under identical starting conditions:
| Algorithm | Starting point | Stopping criterion | Max iterations | Iterations | Optimum $x_{opt}$ | Optimal value $f_{opt}$ |
|---|---|---|---|---|---|---|
| Steepest descent | $[-1.2, 1]$ | $1\times10^{-6}$ | 10000 | 2352 | $[0.9761, 0.9526]$ | $5.7226\times10^{-4}$ |
| Newton's method | $[-1.2, 1]$ | $1\times10^{-6}$ | 1000 | 646 | $[0.9996, 0.9993]$ | $9.6530\times10^{-7}$ |
| Conjugate gradient | $[-1.2, 1]$ | $1\times10^{-6}$ | 10000 | 28 | $[1.0000, 1.0000]$ | $1.0380\times10^{-10}$ |
| Quasi-Newton (DFP) | $[-1.2, 1]$ | $1\times10^{-6}$ | 1000 | - | - | - |
The table shows that the conjugate gradient method needs the fewest iterations, converges fastest, and reaches the value closest to the theoretical optimum $f(x)=0$ (attained at $x_1 = x_2 = 1$). Newton's method also converges quickly, but forming second-derivative information makes each step more expensive. Steepest descent is simple to implement but slow to converge. The quasi-Newton (DFP) run produced no results in this particular test, possibly due to an implementation or parameter issue; it is nonetheless an effective method for most optimization problems.
7. Choosing an Algorithm
Choosing the right optimization algorithm matters in practice. Some suggestions by scenario:
- Global optimization: if a global maximum or minimum is required, the Shubert–Piyavskii algorithm is a good choice despite its higher computational cost.
- Initial exploration: when little is known about the objective, steepest descent is simple to implement and a reasonable first attempt.
- Well-behaved second-order properties: if the objective has good second-order behavior, Newton's method converges quickly, though the cost of computing the Hessian must be kept in mind.
- Quadratic or general non-quadratic functions: the conjugate gradient method converges quickly and does not require the Hessian, making it a good choice.
- Most other problems: quasi-Newton methods sit between steepest descent and Newton's method, avoid computing the Hessian, and suit most optimization problems.
8. Practical Notes
When using these algorithms, keep the following in mind:
- Starting point: algorithms differ in their sensitivity to the starting point. The conjugate gradient method, for example, is fairly sensitive, and a well-chosen starting point improves both convergence speed and accuracy.
- Stopping criterion: the stopping criterion directly affects convergence and cost. Set too strictly, it slows convergence; set too loosely, it yields inaccurate results.
- Step size: the step size strongly affects convergence speed and stability. Some algorithms, such as steepest descent and quasi-Newton methods, determine it with a line search, as sketched below.
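The implementations in this article use a quadratic-interpolation line search (the quadappx helper). As a simpler alternative that is often good enough, here is a minimal backtracking (Armijo) line search sketch; the name armijo and its constants are illustrative assumptions:
function alpha = armijo(fun, x, f0, g, d, alpha0)
% Backtracking line search: shrink alpha until the Armijo sufficient-decrease
% condition f(x + alpha*d) <= f0 + c1*alpha*(g'*d) is satisfied.
c1 = 1e-4; rho = 0.5; alpha = alpha0;
slope = g(:)'*d(:);            % directional derivative; negative for a descent d
for j = 1:50                   % cap the number of halvings
    if fun(x + alpha*d) <= f0 + c1*alpha*slope, return; end
    alpha = rho*alpha;
end
end
A typical call inside a descent loop would be alpha = armijo(fun, x, fun(x), delfun(x), d, 1), after which the iterate is updated to x + alpha*d.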
9. Outlook
Optimization algorithms are widely used in scientific research, engineering, and machine learning, and they continue to evolve. Recent metaheuristics such as genetic algorithms and particle swarm optimization, for example, perform well on complex problems.
Likely future directions include:
- Multi-objective optimization: real problems often require optimizing several objectives at once, making multi-objective algorithms an active research topic.
- Parallel computing: as hardware advances, parallel computing will be used widely within optimization algorithms to improve efficiency.
- Combining deep learning with optimization: many deep learning problems reduce to optimization problems, and combining the two promises better results.
Summary
This article has presented several common optimization algorithms: the Shubert–Piyavskii algorithm, steepest descent, Newton's method, the conjugate gradient method, and quasi-Newton methods. Concrete examples and MATLAB implementations demonstrated how to use each one, their performance was analyzed and compared, and guidance was given on choosing among them along with practical notes. We hope this helps readers better understand and apply optimization algorithms to practical problems.
Flowchart: choosing an algorithm
graph TD;
A[Identify the type of optimization problem] --> B{Global optimization?};
B -- Yes --> C[Shubert–Piyavskii algorithm];
B -- No --> D{How well is the objective understood?};
D -- Poorly --> E[Explore with steepest descent];
D -- Well --> F{Good second-order properties?};
F -- Yes --> G[Newton's method];
F -- No --> H{Quadratic or general non-quadratic function?};
H -- Yes --> I[Conjugate gradient method];
H -- No --> J[Quasi-Newton method];
With the material above, we now have a deeper understanding of these common optimization algorithms. In practice, readers can pick the algorithm that fits the characteristics of the problem at hand and apply the notes and techniques above to improve both the efficiency and the accuracy of the optimization.