Ipopt (Interior Point OPTimizer) is a software package for solving large-scale nonlinear optimization problems. It finds (local) optima of problems of the following form:
$$
\min_{x \in \mathbb{R}^n} \;\; f(x) \\
\text{s.t.} \;\; g_L \leq g(x) \leq g_U \\
x_L \leq x \leq x_U
\tag{0}
$$
Here $f(x): \mathbb{R}^n \rightarrow \mathbb{R}$ is the objective function and $g(x): \mathbb{R}^n \rightarrow \mathbb{R}^m$ are the constraint functions. Both $f(x)$ and $g(x)$ may be nonlinear and nonconvex, but they must be twice continuously differentiable.
To solve an optimization problem, Ipopt needs the following additional information:

- Problem dimensions
  - number of optimization variables $x$;
  - number of constraint functions $g(x)$;
- Bounds
  - bounds on the variables $x$;
  - bounds on the constraints $g(x)$;
- Initial iterate
  - initial values for the variables $x$;
  - initial values for the Lagrange multipliers (only required for a warm start);
- Problem structure
  - number of nonzero entries in the Jacobian of the constraints $g(x)$;
  - number of nonzero entries in the Hessian of the Lagrangian;
  - sparsity structure of the constraint Jacobian (row and column indices of each nonzero entry);
  - sparsity structure of the Hessian of the Lagrangian (row and column indices of each nonzero entry);
- Function values
  - the objective function $f(x)$;
  - the gradient of the objective $\nabla f(x)$;
  - the constraint functions $g(x)$;
  - the Jacobian of the constraints $\nabla g(x)^T$;
  - the Hessian of the Lagrangian $\sigma_f \nabla^2 f(x) + \sum_{i=1}^m \lambda_i \nabla^2 g_i(x)$ (not required when a quasi-Newton option is used).
The problem dimensions and the bounds follow directly from the problem definition. The initial iterate affects whether the problem converges and to which (local) optimum; different starting points may lead to different local optima. Supplying the derivative matrices (Jacobian and Hessian) is somewhat more involved: Ipopt needs the nonzero entries of the constraint Jacobian and of the Hessian of the Lagrangian together with their row and column indices, and the standard interface expects only the lower triangle of the Hessian (which is symmetric). Once the sparsity structure is fixed, it cannot change during the solve; it must therefore include not only the entries that are nonzero at the starting point, but every entry that can become nonzero at any iterate.
1. Example
$$
f = x_1 x_4 (x_1 + x_2 + x_3) + x_3 \\
\text{s.t.} \;\; x_1 x_2 x_3 x_4 \geq 25 \\
x_1^2 + x_2^2 + x_3^2 + x_4^2 = 40 \\
1 \leq x_1, x_2, x_3, x_4 \leq 5 \\
x_0 = (1, 5, 5, 1)
\tag{1-1}
$$
The gradient of the objective is:
$$
\nabla f(x) = \begin{bmatrix}
x_1 x_4 + x_4 (x_1 + x_2 + x_3) \\
x_1 x_4 \\
x_1 x_4 + 1 \\
x_1 (x_1 + x_2 + x_3)
\end{bmatrix}
\tag{1-2}
$$
The Jacobian of the constraints is:
$$
\nabla g(x) = \begin{bmatrix}
x_2 x_3 x_4 & x_1 x_3 x_4 & x_1 x_2 x_4 & x_1 x_2 x_3 \\
2 x_1 & 2 x_2 & 2 x_3 & 2 x_4
\end{bmatrix}
\tag{1-3}
$$
The Hessian of the Lagrangian must also be computed. If a quasi-Newton method is used to approximate the second derivatives, the Hessian is not required, but providing it usually gives better robustness and faster convergence. The Lagrangian of the NLP is defined as $f(x) + g(x)^T \lambda$, whose Hessian is $\nabla^2 f(x) + \sum_{i=1}^m \lambda_i \nabla^2 g_i(x)$. Ipopt, however, introduces a factor $\sigma_f$ so that it can scale the objective and constraint contributions independently, and therefore asks for $\sigma_f \nabla^2 f(x) + \sum_{i=1}^m \lambda_i \nabla^2 g_i(x)$. For problem (1-1) this Hessian is:
$$
\sigma_f
\begin{bmatrix}
2 x_4 & x_4 & x_4 & 2 x_1 + x_2 + x_3 \\
x_4 & 0 & 0 & x_1 \\
x_4 & 0 & 0 & x_1 \\
2 x_1 + x_2 + x_3 & x_1 & x_1 & 0
\end{bmatrix}
+ \lambda_1
\begin{bmatrix}
0 & x_3 x_4 & x_2 x_4 & x_2 x_3 \\
x_3 x_4 & 0 & x_1 x_4 & x_1 x_3 \\
x_2 x_4 & x_1 x_4 & 0 & x_1 x_2 \\
x_2 x_3 & x_1 x_3 & x_1 x_2 & 0
\end{bmatrix}
+ \lambda_2
\begin{bmatrix}
2 & 0 & 0 & 0 \\
0 & 2 & 0 & 0 \\
0 & 0 & 2 & 0 \\
0 & 0 & 0 & 2
\end{bmatrix}
\tag{1-4}
$$
The first term is the Hessian of the objective, the second and third terms are the Hessians of the constraints, and $\lambda_1, \lambda_2$ are the Lagrange multipliers of the constraints.
2. C++ Interface
To use the C++ interface, derive your own solver class from the pure virtual base class Ipopt::TNLP and override 9 of its virtual functions; Ipopt then solves the optimization problem through the Ipopt::IpoptApplication class.
2.1 Ipopt::TNLP::get_nlp_info
virtual bool get_nlp_info(
Index& n,
Index& m,
Index& nnz_jac_g,
Index& nnz_h_lag,
IndexStyleEnum& index_style
) = 0;
Ipopt uses this function to determine how much memory to allocate for its internal arrays; if the sizes reported here are wrong, the resulting memory errors are very hard to debug.

- n: number of optimization variables $x$;
- m: number of constraint functions $g(x)$;
- nnz_jac_g: number of nonzero entries in the Jacobian;
- nnz_h_lag: number of nonzero entries in the Hessian;
- index_style: whether sparse-matrix indices use C style (0-based) or Fortran style (1-based).

The example above has 4 optimization variables and 2 constraint functions. The Jacobian has 8 nonzero entries; the full Hessian has 16, but since it is symmetric only the 10 entries of its lower triangle are needed.
// returns the size of the problem
bool HS071_NLP::get_nlp_info(
Index& n,
Index& m,
Index& nnz_jac_g,
Index& nnz_h_lag,
IndexStyleEnum& index_style
)
{
// The problem described in HS071_NLP.hpp has 4 variables, x[0] through x[3]
n = 4;
// one equality constraint and one inequality constraint
m = 2;
// in this example the jacobian is dense and contains 8 nonzeros
nnz_jac_g = 8;
// the Hessian is also dense and has 16 total nonzeros, but we
// only need the lower left corner (since it is symmetric)
nnz_h_lag = 10;
// use the C style indexing (0-based)
index_style = TNLP::C_STYLE;
return true;
}
2.2 Ipopt::TNLP::get_bounds_info
virtual bool get_bounds_info(
Index n,
Number* x_l,
Number* x_u,
Index m,
Number* g_l,
Number* g_u
) = 0;
Ipopt uses this function to obtain the bounds on the variables $x$ and on the constraints $g(x)$.

- n: number of optimization variables $x$;
- x_l: lower bounds on $x$ (array);
- x_u: upper bounds on $x$ (array);
- m: number of constraint functions $g(x)$;
- g_l: lower bounds on $g(x)$ (array);
- g_u: upper bounds on $g(x)$ (array).

By default, Ipopt treats any bound outside the range $(-10^{19}, 10^{19})$ as minus or plus infinity (controlled by the options nlp_lower_bound_inf and nlp_upper_bound_inf).
// returns the variable bounds
bool HS071_NLP::get_bounds_info(
Index n,
Number* x_l,
Number* x_u,
Index m,
Number* g_l,
Number* g_u
)
{
// here, the n and m we gave IPOPT in get_nlp_info are passed back to us.
// If desired, we could assert to make sure they are what we think they are.
assert(n == 4);
assert(m == 2);
// the variables have lower bounds of 1
for( Index i = 0; i < 4; i++ )
{
x_l[i] = 1.0;
}
// the variables have upper bounds of 5
for( Index i = 0; i < 4; i++ )
{
x_u[i] = 5.0;
}
// the first constraint g1 has a lower bound of 25
g_l[0] = 25;
// the first constraint g1 has NO upper bound, here we set it to 2e19.
// Ipopt interprets any number greater than nlp_upper_bound_inf as
// infinity. The default value of nlp_upper_bound_inf and nlp_lower_bound_inf
// is 1e19 and can be changed through ipopt options.
g_u[0] = 2e19;
// the second constraint g2 is an equality constraint, so we set the
// upper and lower bound to the same value
g_l[1] = g_u[1] = 40.0;
return true;
}
2.3 Ipopt::TNLP::get_starting_point
virtual bool get_starting_point(
Index n,
bool init_x,
Number* x,
bool init_z,
Number* z_L,
Number* z_U,
Index m,
bool init_lambda,
Number* lambda
) = 0;
Ipopt uses this function to obtain the starting point of the iteration.

- n: number of optimization variables $x$;
- init_x: if true, initial values for $x$ must be provided;
- x: initial values of $x$.

The remaining arguments are initial values for the dual variables, which normally do not need to be set; by default Ipopt only requires an initial value for $x$.
// returns the initial point for the problem
bool HS071_NLP::get_starting_point(
Index n,
bool init_x,
Number* x,
bool init_z,
Number* z_L,
Number* z_U,
Index m,
bool init_lambda,
Number* lambda
)
{
// Here, we assume we only have starting values for x, if you code
// your own NLP, you can provide starting values for the dual variables
// if you wish
assert(init_x == true);
assert(init_z == false);
assert(init_lambda == false);
// initialize to the given starting point
x[0] = 1.0;
x[1] = 5.0;
x[2] = 5.0;
x[3] = 1.0;
return true;
}
2.4 Ipopt::TNLP::eval_f
virtual bool eval_f(
Index n,
const Number* x,
bool new_x,
Number& obj_value
) = 0;
Ipopt uses this function to evaluate the objective function.

- n: number of optimization variables $x$;
- x: values of $x$ at which to evaluate $f(x)$;
- new_x: false if $x$ is the same as in the previous eval_* call (useful for caching intermediate results; can usually be ignored);
- obj_value: the value of $f(x)$.
// returns the value of the objective function
bool HS071_NLP::eval_f(
Index n,
const Number* x,
bool new_x,
Number& obj_value
)
{
assert(n == 4);
obj_value = x[0] * x[3] * (x[0] + x[1] + x[2]) + x[2];
return true;
}
2.5 Ipopt::TNLP::eval_grad_f
virtual bool eval_grad_f(
Index n,
const Number* x,
bool new_x,
Number* grad_f
) = 0;
Ipopt uses this function to evaluate the gradient of the objective.

- n: number of optimization variables $x$;
- x: values of $x$ at which to evaluate $\nabla f(x)$;
- new_x: false if $x$ is the same as in the previous eval_* call (can usually be ignored);
- grad_f: the gradient $\nabla f(x)$, an array of the same size as $x$.
// return the gradient of the objective function grad_{x} f(x)
bool HS071_NLP::eval_grad_f(
Index n,
const Number* x,
bool new_x,
Number* grad_f
)
{
assert(n == 4);
grad_f[0] = x[0] * x[3] + x[3] * (x[0] + x[1] + x[2]);
grad_f[1] = x[0] * x[3];
grad_f[2] = x[0] * x[3] + 1;
grad_f[3] = x[0] * (x[0] + x[1] + x[2]);
return true;
}
2.6 Ipopt::TNLP::eval_g
virtual bool eval_g(
Index n,
const Number* x,
bool new_x,
Index m,
Number* g
) = 0;
Ipopt uses this function to evaluate the constraint functions $g(x)$.

- n: number of optimization variables $x$;
- x: values of $x$ at which to evaluate $g(x)$;
- new_x: false if $x$ is the same as in the previous eval_* call (can usually be ignored);
- m: number of constraint functions $g(x)$;
- g: the values of $g(x)$, an array of size m.
// return the value of the constraints: g(x)
bool HS071_NLP::eval_g(
Index n,
const Number* x,
bool new_x,
Index m,
Number* g
)
{
assert(n == 4);
assert(m == 2);
g[0] = x[0] * x[1] * x[2] * x[3];
g[1] = x[0] * x[0] + x[1] * x[1] + x[2] * x[2] + x[3] * x[3];
return true;
}
2.7 Ipopt::TNLP::eval_jac_g
virtual bool eval_jac_g(
Index n,
const Number* x,
bool new_x,
Index m,
Index nele_jac,
Index* iRow,
Index* jCol,
Number* values
) = 0;
Ipopt uses this function to obtain the values of the nonzero entries of the constraint Jacobian together with their row and column indices in the sparse matrix. The entry in row $i$ and column $j$ of the Jacobian is the derivative of $g_i(x)$ with respect to $x_j$.

- n: number of optimization variables $x$;
- x: values of $x$ at which to evaluate $\nabla g(x)^T$;
- new_x: false if $x$ is the same as in the previous eval_* call (can usually be ignored);
- m: number of constraint functions $g(x)$;
- nele_jac: number of nonzero entries in the Jacobian;
- iRow: row indices of the nonzero entries (0-based with C-style indexing);
- jCol: column indices of the nonzero entries (0-based with C-style indexing);
- values: the values of the nonzero entries.

Note that: ① iRow, jCol, and values all have the same length, and corresponding positions must describe the same nonzero entry (row index, column index, value); ② iRow and jCol only need to be filled once: on the first call, x and values are NULL and only the structure (iRow, jCol) is requested; on subsequent calls, when Ipopt needs the values, iRow and jCol are NULL and only values must be filled.
// return the structure or values of the Jacobian
bool HS071_NLP::eval_jac_g(
Index n,
const Number* x,
bool new_x,
Index m,
Index nele_jac,
Index* iRow,
Index* jCol,
Number* values
)
{
assert(n == 4);
assert(m == 2);
if( values == NULL )
{
// return the structure of the Jacobian
// this particular Jacobian is dense
iRow[0] = 0;
jCol[0] = 0;
iRow[1] = 0;
jCol[1] = 1;
iRow[2] = 0;
jCol[2] = 2;
iRow[3] = 0;
jCol[3] = 3;
iRow[4] = 1;
jCol[4] = 0;
iRow[5] = 1;
jCol[5] = 1;
iRow[6] = 1;
jCol[6] = 2;
iRow[7] = 1;
jCol[7] = 3;
}
else
{
// return the values of the Jacobian of the constraints
values[0] = x[1] * x[2] * x[3]; // 0,0
values[1] = x[0] * x[2] * x[3]; // 0,1
values[2] = x[0] * x[1] * x[3]; // 0,2
values[3] = x[0] * x[1] * x[2]; // 0,3
values[4] = 2 * x[0]; // 1,0
values[5] = 2 * x[1]; // 1,1
values[6] = 2 * x[2]; // 1,2
values[7] = 2 * x[3]; // 1,3
}
return true;
}
2.8 Ipopt::TNLP::eval_h
virtual bool eval_h(
Index n,
const Number* x,
bool new_x,
Number obj_factor,
Index m,
const Number* lambda,
bool new_lambda,
Index nele_hess,
Index* iRow,
Index* jCol,
Number* values
)
Ipopt uses this function to obtain the values of the nonzero entries of the Hessian of the Lagrangian together with their row and column indices in the sparse matrix.

- n: number of optimization variables $x$;
- x: values of $x$ at which to evaluate the Hessian;
- new_x: false if $x$ is the same as in the previous eval_* call (can usually be ignored);
- obj_factor: the factor $\sigma_f$;
- m: number of constraint functions $g(x)$;
- lambda: the Lagrange multipliers $\lambda$;
- new_lambda: false if the previous call used the same $\lambda$ (can usually be ignored);
- nele_hess: number of nonzero entries in the Hessian (lower triangle);
- iRow: row indices of the nonzero entries (0-based with C-style indexing);
- jCol: column indices of the nonzero entries (0-based with C-style indexing);
- values: the values of the nonzero entries.

Note that: ① iRow, jCol, and values all have the same length, and corresponding positions must describe the same nonzero entry (row index, column index, value); ② iRow and jCol only need to be filled once: on the first call, x, lambda, and values are NULL and only the structure is requested; on subsequent calls, when Ipopt needs the values, iRow and jCol are NULL and only values must be filled; ③ since the Hessian is symmetric, Ipopt works with its lower triangle only; ④ Ipopt requires the Hessian by default, but not when a quasi-Newton approximation is used.

In this example the Hessian is dense, but it is still passed through the sparse-matrix interface.
//return the structure or values of the Hessian
bool HS071_NLP::eval_h(
Index n,
const Number* x,
bool new_x,
Number obj_factor,
Index m,
const Number* lambda,
bool new_lambda,
Index nele_hess,
Index* iRow,
Index* jCol,
Number* values
)
{
assert(n == 4);
assert(m == 2);
if( values == NULL )
{
// return the structure. This is a symmetric matrix, fill the lower left
// triangle only.
// the hessian for this problem is actually dense
Index idx = 0;
for( Index row = 0; row < 4; row++ )
{
for( Index col = 0; col <= row; col++ )
{
iRow[idx] = row;
jCol[idx] = col;
idx++;
}
}
assert(idx == nele_hess);
}
else
{
// return the values. This is a symmetric matrix, fill the lower left
// triangle only
// fill the objective portion
values[0] = obj_factor * (2 * x[3]); // 0,0
values[1] = obj_factor * (x[3]); // 1,0
values[2] = 0.; // 1,1
values[3] = obj_factor * (x[3]); // 2,0
values[4] = 0.; // 2,1
values[5] = 0.; // 2,2
values[6] = obj_factor * (2 * x[0] + x[1] + x[2]); // 3,0
values[7] = obj_factor * (x[0]); // 3,1
values[8] = obj_factor * (x[0]); // 3,2
values[9] = 0.; // 3,3
// add the portion for the first constraint
values[1] += lambda[0] * (x[2] * x[3]); // 1,0
values[3] += lambda[0] * (x[1] * x[3]); // 2,0
values[4] += lambda[0] * (x[0] * x[3]); // 2,1
values[6] += lambda[0] * (x[1] * x[2]); // 3,0
values[7] += lambda[0] * (x[0] * x[2]); // 3,1
values[8] += lambda[0] * (x[0] * x[1]); // 3,2
// add the portion for the second constraint
values[0] += lambda[1] * 2; // 0,0
values[2] += lambda[1] * 2; // 1,1
values[5] += lambda[1] * 2; // 2,2
values[9] += lambda[1] * 2; // 3,3
}
return true;
}
2.9 Ipopt::TNLP::finalize_solution
virtual void finalize_solution(
SolverReturn status,
Index n,
const Number* x,
const Number* z_L,
const Number* z_U,
Index m,
const Number* g,
const Number* lambda,
Number obj_value,
const IpoptData* ip_data,
IpoptCalculatedQuantities* ip_cq
) = 0;
Ipopt uses this function to report the result of the optimization; the most important arguments are:

- status: the solver status:
  - SUCCESS: a local optimum satisfying the convergence tolerances was found;
  - MAXITER_EXCEEDED: the maximum number of iterations was exceeded;
  - CPUTIME_EXCEEDED: the maximum CPU time was exceeded;
  - STOP_AT_ACCEPTABLE_POINT: the iterates converged to a point that does not satisfy the desired tolerances but lies within the acceptable ones;
  - LOCAL_INFEASIBILITY: no feasible point could be found, usually because the bounds or constraints are inconsistent;
- x: the values of the variables $x$ at the (local) optimum.
void HS071_NLP::finalize_solution(
SolverReturn status,
Index n,
const Number* x,
const Number* z_L,
const Number* z_U,
Index m,
const Number* g,
const Number* lambda,
Number obj_value,
const IpoptData* ip_data,
IpoptCalculatedQuantities* ip_cq
)
{
// here is where we would store the solution to variables, or write to a file, etc
// so we could use the solution.
// For this example, we write the solution to the console
std::cout << std::endl << std::endl << "Solution of the primal variables, x" << std::endl;
for( Index i = 0; i < n; i++ )
{
std::cout << "x[" << i << "] = " << x[i] << std::endl;
}
std::cout << std::endl << std::endl << "Solution of the bound multipliers, z_L and z_U" << std::endl;
for( Index i = 0; i < n; i++ )
{
std::cout << "z_L[" << i << "] = " << z_L[i] << std::endl;
}
for( Index i = 0; i < n; i++ )
{
std::cout << "z_U[" << i << "] = " << z_U[i] << std::endl;
}
std::cout << std::endl << std::endl << "Objective value" << std::endl;
std::cout << "f(x*) = " << obj_value << std::endl;
std::cout << std::endl << "Final value of the constraints:" << std::endl;
for( Index i = 0; i < m; i++ )
{
std::cout << "g(" << i << ") = " << g[i] << std::endl;
}
}
2.10 main function
The Ipopt::TNLP functions have been overridden above; what remains is the driver code that calls Ipopt to carry out the solve.
#include "IpIpoptApplication.hpp"
#include "hs071_nlp.hpp"
#include <iostream>
using namespace Ipopt;
int main(
int /*argv*/,
char** /*argc*/
)
{
// Create a new instance of your nlp
// (use a SmartPtr, not raw)
SmartPtr<TNLP> mynlp = new HS071_NLP();
// Create a new instance of IpoptApplication
// (use a SmartPtr, not raw)
// We are using the factory, since this allows us to compile this
// example with an Ipopt Windows DLL
SmartPtr<IpoptApplication> app = IpoptApplicationFactory();
// Change some options
// Note: The following choices are only examples, they might not be
// suitable for your optimization problem.
app->Options()->SetNumericValue("tol", 3.82e-6);
app->Options()->SetStringValue("mu_strategy", "adaptive");
app->Options()->SetStringValue("output_file", "ipopt.out");
// The following overwrites the default name (ipopt.opt) of the options file
// app->Options()->SetStringValue("option_file_name", "hs071.opt");
// Initialize the IpoptApplication and process the options
ApplicationReturnStatus status;
status = app->Initialize();
if( status != Solve_Succeeded )
{
std::cout << std::endl << std::endl << "*** Error during initialization!" << std::endl;
return (int) status;
}
// Ask Ipopt to solve the problem
status = app->OptimizeTNLP(mynlp);
if( status == Solve_Succeeded )
{
std::cout << std::endl << std::endl << "*** The problem solved!" << std::endl;
}
else
{
std::cout << std::endl << std::endl << "*** The problem FAILED!" << std::endl;
}
// As the SmartPtrs go out of scope, the reference count
// will be decremented and the objects will automatically
// be deleted.
return (int) status;
}
This article has described how to solve large-scale nonlinear optimization problems with Ipopt, covering the problem formulation, the information Ipopt requires, and the C++ interface implementation.