Discrete Optimization
Reference: Discrete Optimization at Coursera
Optimization Methods in Management Science
Operations Research is an engineering and scientific approach to decision making
NP-Complete problems
- a candidate solution can be checked in polynomial time
- if we can solve one NP-Complete problem quickly, we can solve them all
NP-Hard problems
- as the problem size increases, the time cost grows exponentially
- in practice, lower the standard: find high-quality (not necessarily optimal) solutions, and push the exponential growth as far out as possible
Problem Modeling
- decision variables
- problem constraints
- objective function
Greedy algorithm / Heuristics
- apply a greedy strategy to each subproblem as the problem is split
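A minimal sketch of a greedy heuristic, using the 0/1-knapsack instance that appears in the branch-and-bound section below (values 45, 48, 35; weights 5, 8, 3; capacity 10); the function name `greedy_knapsack` is illustrative:

```python
# Greedy heuristic for 0/1 knapsack: take items in decreasing value density.
# No optimality guarantee in general, though it happens to be optimal here.

def greedy_knapsack(values, weights, capacity):
    order = sorted(range(len(values)),
                   key=lambda i: values[i] / weights[i], reverse=True)
    total_value, remaining, taken = 0, capacity, []
    for i in order:
        if weights[i] <= remaining:       # take the item if it still fits
            taken.append(i)
            remaining -= weights[i]
            total_value += values[i]
    return total_value, sorted(taken)

# densities are 9, 6, 11.7 -> take item 2 first, then item 0
value, items = greedy_knapsack([45, 48, 35], [5, 8, 3], 10)
print(value, items)  # 80 [0, 2]
```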
Dynamic Programming
- divide and conquer
- bottom-up computation
- fill a decision table from small subproblems to large, then trace back through it to recover an optimal solution
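The decision-table idea can be sketched as follows (a hypothetical `knapsack_dp` helper, run on the same 0/1-knapsack instance used later in these notes):

```python
# Bottom-up DP for 0/1 knapsack: table[i][c] = best value achievable with the
# first i items and capacity c; then trace back to recover the chosen items.

def knapsack_dp(values, weights, capacity):
    n = len(values)
    table = [[0] * (capacity + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):                  # small subproblems to large
        for c in range(capacity + 1):
            table[i][c] = table[i - 1][c]      # option 1: skip item i-1
            if weights[i - 1] <= c:            # option 2: take it, if it fits
                table[i][c] = max(table[i][c],
                                  table[i - 1][c - weights[i - 1]] + values[i - 1])
    # traceback: an item was taken iff it changed the table value
    chosen, c = [], capacity
    for i in range(n, 0, -1):
        if table[i][c] != table[i - 1][c]:
            chosen.append(i - 1)
            c -= weights[i - 1]
    return table[n][capacity], sorted(chosen)

print(knapsack_dp([45, 48, 35], [5, 8, 3], 10))  # (80, [0, 2])
```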
Branch and Bound
- branching: split the problem into subproblems, as in exhaustive search
- bounding: find an optimistic estimate of the best solution to each subproblem
- build a search tree, maintaining a lower bound (for minimization) or an upper bound (for maximization)
$$\begin{aligned}
\text{maximize}\quad & 45x_1+48x_2+35x_3\\
\text{subject to}\quad & 5x_1+8x_2+3x_3\leq 10\\
& 0\leq x_i \leq 1\quad (i\in 1..3)
\end{aligned}$$
- depth-first
  - prunes when a node's optimistic estimate is worse than the best solution found so far
  - linear relaxation: order items by value density; V1/W1 = 9, V2/W2 = 6, V3/W3 ≈ 11.7
  - take all of items 3 and 1 and 1/4 of item 2: estimate = 35 + 45 + 12 = 92, an upper bound
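The depth-first scheme with a linear-relaxation bound can be sketched as follows (function names are illustrative; the instance is the one from the model above, and the root estimate reproduces the 92 computed there):

```python
# Depth-first branch and bound for 0/1 knapsack. The optimistic estimate at
# each node is the linear relaxation: fill the remaining capacity in decreasing
# value density, taking a fraction of the last item.

def knapsack_bb(values, weights, capacity):
    order = sorted(range(len(values)),
                   key=lambda i: values[i] / weights[i], reverse=True)

    def bound(k, room, value):
        # linear-relaxation estimate over the still-undecided items order[k:]
        est = value
        for i in order[k:]:
            if weights[i] <= room:
                est += values[i]
                room -= weights[i]
            else:
                est += values[i] * room / weights[i]   # fractional last item
                break
        return est

    best = 0

    def search(k, room, value):
        nonlocal best
        best = max(best, value)
        # prune: the optimistic estimate cannot beat the incumbent
        if k == len(order) or bound(k, room, value) <= best:
            return
        i = order[k]
        if weights[i] <= room:
            search(k + 1, room - weights[i], value + values[i])  # take item i
        search(k + 1, room, value)                               # skip item i

    root_estimate = bound(0, capacity, 0)
    search(0, capacity, 0)
    return best, root_estimate

print(knapsack_bb([45, 48, 35], [5, 8, 3], 10))  # (80, 92.0)
```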
- best-first
  - selects the node with the best estimate
  - prunes when all remaining nodes are worse than the best solution found
  - relaxation (drop the capacity constraint entirely): estimate = 45 + 48 + 35 = 128
- least-discrepancy (Limited Discrepancy Search)
  - trusts a greedy heuristic
  - in a binary search tree, branching left follows the heuristic; branching right assumes the heuristic is wrong
  - to avoid mistakes, explore the search space in increasing order of discrepancies, trusting the heuristic less and less
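The discrepancy ordering can be sketched as an enumeration of paths in a binary tree, where 0 means "follow the heuristic" and 1 means "go against it" (`lds_paths` is a hypothetical helper):

```python
# Limited Discrepancy Search order: enumerate all 0/1 branching paths of a
# given depth, in waves of increasing discrepancy count (number of 1s).

def lds_paths(depth):
    for wave in range(depth + 1):        # wave k: paths with exactly k mistakes
        yield from _paths(depth, wave)

def _paths(depth, k):
    if depth == 0:
        if k == 0:
            yield ()
        return
    if k < depth:                        # follow the heuristic (left) first
        for rest in _paths(depth - 1, k):
            yield (0,) + rest
    if k > 0:                            # then spend one discrepancy (right)
        for rest in _paths(depth - 1, k - 1):
            yield (1,) + rest

print(list(lds_paths(2)))  # [(0, 0), (0, 1), (1, 0), (1, 1)]
```

The fully-heuristic path comes first, then all paths with one mistake, then two, and so on.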
Constraint Programming
- a computational paradigm:
  - use constraints to reduce the set of values each variable can take
  - make a choice when no more deduction can be performed
  - if a choice turns out to be wrong, backtrack and try another value
- Branch and Prune
  - constraint propagation: feasibility checking and pruning
  - break value symmetries
  - reduce the search space using global constraints and redundant constraints
- all-different constraint
  - model it as a directed bipartite graph, with one vertex set for the variables and one for the values; feasibility reduces to finding a matching
- first-fail principle
  - try first where you are the most likely to fail
  - choose the variable with the smallest domain
  - choose the value that leaves as many options as possible to the other variables
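The branch-and-prune loop with the first-fail principle can be sketched on N-queens (a standard CP example; `solve_queens` and `propagate` are illustrative names, and the propagation here is simple fixpoint pruning rather than a full all-different filter):

```python
# Branch and prune on N-queens: propagation removes attacked columns from the
# other rows' domains; branching follows the first-fail principle by picking
# the unfixed row with the smallest remaining domain.

def propagate(domains):
    """Prune to a fixpoint; return False if some domain becomes empty."""
    changed = True
    while changed:
        changed = False
        for r, d in domains.items():
            if len(d) != 1:
                continue
            c = next(iter(d))                 # row r is fixed at column c
            for r2, d2 in domains.items():
                if r2 == r:
                    continue
                attacked = {c2 for c2 in d2
                            if c2 == c or abs(c2 - c) == abs(r2 - r)}
                if attacked:
                    d2 -= attacked            # pruning
                    changed = True
    return all(domains.values())              # feasible iff no domain is empty

def solve_queens(n, domains=None):
    if domains is None:
        domains = {r: set(range(n)) for r in range(n)}
    if not propagate(domains):
        return None                           # infeasible: backtrack
    open_rows = [r for r in domains if len(domains[r]) > 1]
    if not open_rows:
        return [next(iter(domains[r])) for r in range(n)]
    row = min(open_rows, key=lambda r: len(domains[r]))  # first-fail
    for col in sorted(domains[row]):
        child = {r: set(d) for r, d in domains.items()}
        child[row] = {col}                    # make a choice
        solution = solve_queens(n, child)
        if solution:
            return solution
    return None

print(solve_queens(4))
```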
Local Search
- move from configuration to configuration by performing local moves
- works with complete assignments to the decision variables and modifies them
- how to select a move: max/min conflict
  - choose a decision variable that appears in the most violations
  - assign it a value that minimizes its violations
- can get stuck in local minima; no guarantee of global optimality
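The min-conflict move selection can be sketched on N-queens (illustrative names; one queen per row, and the decision variable is each queen's column):

```python
import random

# Min-conflict local search on N-queens: start from a complete, possibly
# violated assignment; repeatedly pick a conflicted queen and move it to the
# column that minimizes its violations.

def conflicts(cols, row, col):
    """Number of queens attacking a queen placed at (row, col)."""
    return sum(1 for r, c in enumerate(cols)
               if r != row and (c == col or abs(c - col) == abs(r - row)))

def min_conflicts(n, max_steps=10000, seed=0):
    rng = random.Random(seed)
    cols = [rng.randrange(n) for _ in range(n)]   # complete assignment
    for _ in range(max_steps):
        conflicted = [r for r in range(n) if conflicts(cols, r, cols[r]) > 0]
        if not conflicted:
            return cols                           # no violations left
        row = rng.choice(conflicted)              # variable in violations
        # local move: reassign to the value with the fewest violations
        cols[row] = min(range(n), key=lambda c: conflicts(cols, row, c))
    return None                                   # stuck in a local minimum

# min-conflict can stall, so restart from a few random seeds
sol = None
for s in range(10):
    sol = min_conflicts(8, seed=s)
    if sol is not None:
        break
print(sol)
```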
Simulated Annealing
- a metaheuristic extension of local search
- start with a high temperature: the search is essentially a random walk
- decrease the temperature progressively
- accept a worsening move with probability exp(-(E_new - E)/T); improving moves are always accepted
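A minimal sketch of the acceptance rule on a toy one-dimensional instance (the `anneal` helper, the toy energy function, and all parameter values are assumptions, not from the course):

```python
import math
import random

# Simulated annealing: accept worsening moves with probability
# exp(-(E_new - E)/T); start hot (random walk), cool progressively.

def anneal(energy, start, neighbor, t0=10.0, cooling=0.99, steps=2000, seed=0):
    rng = random.Random(seed)
    state, best, t = start, start, t0
    for _ in range(steps):
        cand = neighbor(state, rng)
        delta = energy(cand) - energy(state)
        # improving moves always accepted; worsening ones probabilistically
        if delta <= 0 or rng.random() < math.exp(-delta / t):
            state = cand
        if energy(state) < energy(best):
            best = state
        t *= cooling      # cooling turns the random walk into greedy descent
    return best

# toy instance: minimize (x - 7)^2 over the integers, starting at 0
best_x = anneal(lambda x: (x - 7) ** 2, start=0,
                neighbor=lambda x, rng: x + rng.choice([-1, 1]))
print(best_x)
```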
Tabu Search
- maintain the sequence of nodes already visited
- select the best configuration that is not tabu, i.e. has not been visited before
- maintaining all visited nodes is expensive:
  - short-term memory: only keep a small suffix of visited nodes
  - change the tabu-list size dynamically: decrease it when the selected node degrades the objective, increase it when it improves
  - store the transitions (moves) rather than the full states
- Intensification: store high-quality solutions and return to them periodically
- Diversification: when the search is not producing improvements, diversify the current state
- Strategic oscillation: change the percentage of time spent in the feasible and infeasible regions
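The short-term-memory variant can be sketched as follows (the landscape and names are hypothetical; note how the search climbs out of the local minimum at index 2 because recently visited states are tabu):

```python
from collections import deque

# Tabu search: always move to the best neighbor that is NOT tabu, even if it
# worsens the objective; a fixed-size suffix of visited states prevents cycling.

def tabu_search(energy, start, neighbors, tabu_size=5, steps=50):
    state, best = start, start
    tabu = deque([start], maxlen=tabu_size)   # short-term memory
    for _ in range(steps):
        candidates = [s for s in neighbors(state) if s not in tabu]
        if not candidates:
            break
        state = min(candidates, key=energy)   # best non-tabu move
        tabu.append(state)
        if energy(state) < energy(best):
            best = state
    return best

# toy landscape: local minimum at index 2, global minimum at index 8
landscape = [5, 3, 1, 4, 6, 4, 2, 1, 0, 9]
best = tabu_search(lambda x: landscape[x], start=0,
                   neighbors=lambda x: [n for n in (x - 1, x + 1)
                                        if 0 <= n < len(landscape)])
print(best)  # 8: plain greedy descent would stop at the local minimum x=2
```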
Linear Programming
- linear objective with linear equality and inequality constraints
- every point in a polytope is a convex combination of its vertices

$$\begin{aligned}
\text{min}\quad & c_1 x_1+\dots+c_n x_n\\
\text{subject to}\quad & a_{11} x_1+\dots+a_{1n} x_n\leq b_1\\
& \dots\\
& a_{m1} x_1+\dots+a_{mn} x_n\leq b_m\\
& x_i \geq 0\quad (i\in 1..n)
\end{aligned}$$
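Since the optimum of an LP lies at a vertex of the polytope, tiny LPs can be solved by brute force: enumerate intersections of constraint boundaries, keep the feasible ones (the vertices), and evaluate the objective there. The 2-variable instance below is a hypothetical example:

```python
from itertools import combinations

# min  -3x - 5y   s.t.  x <= 4,  2y <= 12,  3x + 2y <= 18,  x >= 0,  y >= 0
A = [[1, 0], [0, 2], [3, 2], [-1, 0], [0, -1]]   # rows a_i . (x, y) <= b_i
b = [4, 12, 18, 0, 0]
c = [-3, -5]

def vertices(A, b, eps=1e-9):
    """Feasible intersection points of pairs of constraint boundaries."""
    for (a1, b1), (a2, b2) in combinations(zip(A, b), 2):
        det = a1[0] * a2[1] - a1[1] * a2[0]
        if abs(det) < eps:
            continue                   # parallel boundaries: no intersection
        x = (b1 * a2[1] - b2 * a1[1]) / det     # Cramer's rule
        y = (a1[0] * b2 - a2[0] * b1) / det
        if all(ai[0] * x + ai[1] * y <= bi + eps for ai, bi in zip(A, b)):
            yield (x, y)

best = min(vertices(A, b), key=lambda v: c[0] * v[0] + c[1] * v[1])
print(best)  # (2.0, 6.0), objective -36
```

This is only viable for a handful of variables; the simplex method walks between vertices instead of enumerating them all.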