Wiki Ref: http://en.wikipedia.org/wiki/Lagrange_multipliers
General formulation: The weak Lagrangian principle
Denote the objective function by f(x) and let the constraints be given by g_k(x) = 0, perhaps by moving constants to the left-hand side, so that a constraint of the form h(x) = c becomes g_k(x) = h(x) − c = 0. The domain of f should be an open set containing all points satisfying the constraints. Furthermore, f and the g_k must have continuous first partial derivatives, and the gradients of the g_k must not be zero on the domain.[1] Now, define the Lagrangian, Λ, as
\Lambda(x, \lambda) = f(x) + \sum_k \lambda_k g_k(x)
- k is an index for variables and functions associated with a particular constraint, k.
- λ without a subscript indicates the vector with elements λ_k, which are taken to be independent variables.
Observe that both the optimization criteria and the constraints g_k(x) are compactly encoded as stationary points of the Lagrangian:

\nabla_x \Lambda = 0

if and only if

\nabla_x f = -\sum_k \lambda_k \nabla_x g_k ,

where ∇_x means to take the gradient only with respect to each element of the vector x, instead of all variables, and

\nabla_\lambda \Lambda = 0

implies g_k = 0 for every k.
Collectively, the stationary points of the Lagrangian,

\nabla \Lambda = 0 ,

give a number of unique equations totaling the length of x plus the length of λ. This often makes it possible to solve for every x and λ_k without inverting the g_k.[1] For this reason, the Lagrange multiplier method can be useful in situations where it is easier to find derivatives of the constraint functions than to invert them.
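As a concrete illustration of solving the stationarity system directly, here is a minimal sketch using sympy; the example problem (maximize f(x, y) = x + y on the unit circle) is my own choice for illustration and is not taken from the article.

# A hypothetical sketch: find the stationary points of Λ for
# f(x, y) = x + y subject to x^2 + y^2 = 1, using sympy.
import sympy as sp

x, y, lam = sp.symbols('x y lambda', real=True)

f = x + y                  # objective
g = x**2 + y**2 - 1        # constraint written as g(x, y) = 0
Lagrangian = f + lam * g   # Λ = f + λ g, as defined above

# Stationarity in all variables: ∂Λ/∂x = ∂Λ/∂y = ∂Λ/∂λ = 0
stationarity = [sp.diff(Lagrangian, v) for v in (x, y, lam)]
solutions = sp.solve(stationarity, (x, y, lam), dict=True)
print(solutions)
# Two stationary points, x = y = ±√2/2; the positive one maximizes f,
# the negative one minimizes it.

Note that the constraint is recovered automatically from ∂Λ/∂λ = 0, so g never has to be inverted.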
Often the Lagrange multipliers have an interpretation as some salient quantity of interest. To see why this might be the case, observe that

\frac{\partial \Lambda}{\partial g_k} = \lambda_k .
So, λ_k is the rate of change of the quantity being optimized as a function of the constraint variable. As examples, in Lagrangian mechanics the equations of motion are derived by finding stationary points of the action, the time integral of the difference between kinetic and potential energy. Thus, the force on a particle due to a scalar potential, F = −∇V, can be interpreted as a Lagrange multiplier determining the change in action (transfer of potential to kinetic energy) following a variation in the particle's constrained trajectory. In economics, the optimal profit to a player is calculated subject to a constrained space of actions, where a Lagrange multiplier is the value of relaxing a given constraint (e.g. through bribery or other means).
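As a small worked example of this sensitivity interpretation (the specific functions, the constraint level c, and the choice of writing the constraint as g = c − x² − y² are mine, for illustration only), take f(x, y) = x + y subject to x² + y² = c, so that Λ = x + y + λ(c − x² − y²). At the maximizing stationary point:

\partial_x \Lambda = 1 - 2\lambda x = 0, \quad \partial_y \Lambda = 1 - 2\lambda y = 0, \quad \partial_\lambda \Lambda = c - x^2 - y^2 = 0

\Rightarrow\; x = y = \sqrt{c/2}, \quad \lambda = \frac{1}{\sqrt{2c}}, \quad f^*(c) = \sqrt{2c}, \quad \frac{d f^*}{d c} = \frac{1}{\sqrt{2c}} = \lambda .

So the multiplier equals the rate at which the optimal value grows as the constraint level c is relaxed; the sign of this relationship depends on how the constraint is written into Λ.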
The method of Lagrange multipliers is generalized by the Karush-Kuhn-Tucker conditions.
Wiki Ref: http://en.wikipedia.org/wiki/Karush-Kuhn-Tucker_conditions
In mathematics, the Karush-Kuhn-Tucker conditions (also known as the Kuhn-Tucker or KKT conditions) are necessary conditions for a solution of a nonlinear programming problem to be optimal. They are a generalization of the method of Lagrange multipliers, which handles only equality constraints.
Let us consider the following nonlinear optimization problem:

\min_{x} f(x) \quad \text{subject to} \quad g_i(x) \le 0, \;\; i = 1, \ldots, m, \qquad h_j(x) = 0, \;\; j = 1, \ldots, l,

where f(x) is the function to be minimized, the g_i(x) are the inequality constraints, the h_j(x) are the equality constraints, and m and l are the numbers of inequality and equality constraints, respectively.
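As a hedged illustration of this problem class, a problem of exactly this form can be handed to a general-purpose solver such as SciPy's SLSQP method; the objective, constraints, and starting point below are invented for the example and are not from the article.

# A minimal sketch: minimize f(x) subject to one inequality constraint
# g(x) <= 0 and one equality constraint h(x) = 0 via scipy.optimize.minimize.
import numpy as np
from scipy.optimize import minimize

def f(x):
    return (x[0] - 1.0)**2 + (x[1] - 2.5)**2        # objective to be minimized

constraints = [
    # SciPy expects inequalities in the form c(x) >= 0, so g(x) <= 0 is passed as -g(x) >= 0.
    {'type': 'ineq', 'fun': lambda x: 1.0 - x[0] - x[1]},  # g(x) = x0 + x1 - 1 <= 0
    {'type': 'eq',   'fun': lambda x: x[0] - 2.0 * x[1]},  # h(x) = x0 - 2*x1 = 0
]

result = minimize(f, x0=np.array([0.0, 0.0]), method='SLSQP', constraints=constraints)
print(result.x, result.fun)   # optimum near x = (2/3, 1/3), where the inequality is active

The necessary conditions described next are exactly the optimality conditions that such a solver tries to satisfy.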
The necessary conditions for this inequality-constrained problem were first published in the master's thesis of William Karush[1], although they became renowned after a seminal conference paper by Harold W. Kuhn and Albert W. Tucker.[2]
Necessary conditions
Suppose that the objective function, i.e., the function to be minimized, is f : \mathbb{R}^n \to \mathbb{R} and the constraint functions are g_i : \mathbb{R}^n \to \mathbb{R} and h_j : \mathbb{R}^n \to \mathbb{R}. Further, suppose they are continuously differentiable at a point x^*. If x^* is a local minimum (and suitable regularity conditions, known as constraint qualifications, hold at x^*), then there exist constants μ_i (i = 1, …, m) and λ_j (j = 1, …, l) such that [3]