Discrete Optimization


Reference: Discrete Optimization at Coursera

Optimization Methods in Management Science


Operations research is an engineering and scientific approach to decision making

NP-Complete problems

  • A candidate solution can be checked quickly, in polynomial time
  • If we can solve one NP-complete problem quickly, we can solve them all

NP-Hard problems

  • As the problem size increases, the time cost grows exponentially
  • Lower our standards and settle for high-quality (not provably optimal) solutions, pushing the exponential growth back

Problem Modeling

  • decision variables
  • problem constraints
  • objective function

Greedy algorithm / Heuristics

make the locally best (greedy) choice in each subproblem as the problem is split up
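As a sketch, here is the value-density greedy heuristic applied to the knapsack instance used in the Branch and Bound example below (values 45, 48, 35; weights 5, 8, 3; capacity 10). The function name is illustrative, not from the course:

```python
def greedy_knapsack(values, weights, capacity):
    # sort items by value/weight ratio, best first
    order = sorted(range(len(values)),
                   key=lambda i: values[i] / weights[i], reverse=True)
    total, taken = 0, []
    for i in order:
        if weights[i] <= capacity:   # take every item that still fits
            capacity -= weights[i]
            total += values[i]
            taken.append(i)
    return total, taken

value, items = greedy_knapsack([45, 48, 35], [5, 8, 3], 10)
# on this instance the greedy answer (items 1 and 3, value 80) is optimal,
# but in general a greedy heuristic gives no such guarantee
```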


Dynamic Programming

  • divide and conquer

  • bottom-up computation

  • fill a decision table from small subproblems to large, then trace back through it to recover an optimal solution
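The three steps can be sketched as a bottom-up knapsack table on the same instance (the table layout is one common choice, not the course's exact code):

```python
def dp_knapsack(values, weights, capacity):
    n = len(values)
    # table[i][c] = best value using the first i items with capacity c
    table = [[0] * (capacity + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):                      # small to large
        for c in range(capacity + 1):
            table[i][c] = table[i - 1][c]          # skip item i-1
            if weights[i - 1] <= c:                # or take it, if it fits
                table[i][c] = max(table[i][c],
                                  table[i - 1][c - weights[i - 1]] + values[i - 1])
    # trace back through the table to recover one optimal solution
    taken, c = [], capacity
    for i in range(n, 0, -1):
        if table[i][c] != table[i - 1][c]:         # item i-1 was taken
            taken.append(i - 1)
            c -= weights[i - 1]
    return table[n][capacity], taken

best, items = dp_knapsack([45, 48, 35], [5, 8, 3], 10)
```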


Branch and Bound

  • branching: split the problem into subproblems like in exhaustive search

  • bounding: find an optimistic estimate of the best solution to the subproblem

  • build a tree, lower bound (min) and upper bound (max)
    maximize    45 x1 + 48 x2 + 35 x3
    subject to  5 x1 + 8 x2 + 3 x3 ≤ 10
                0 ≤ xi ≤ 1    (i ∈ 1..3)

  • depth-first

    • prune a node when its optimistic estimate is worse than the best solution found so far

    linear relaxation:

    V1/W1=9, V2/W2=6, V3/W3=11.7

    take items 3 and 1 whole plus 1/4 of item 2: estimate = 35 + 45 + 12 = 92 (the maximum)

  • best-first

    • select the node with the best estimation

    • prunes when all the nodes are worse than the found solution

    relaxation (drop the capacity constraint): estimate = 45 + 48 + 35 = 128

  • least-discrepancy, Limited Discrepancy Search

    • trust a greedy heuristic

    • in the binary search tree, following the heuristic means branching left; branching right means assuming the heuristic is wrong

    • avoid mistakes, explore the search space in increasing order of mistakes, trusting the heuristic less and less
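A minimal depth-first branch-and-bound sketch for the knapsack example above, using the linear relaxation as the optimistic estimate (function names are illustrative):

```python
def relaxation_bound(items, k, capacity, value):
    # linear relaxation: items are pre-sorted by value density; fill the
    # remaining capacity greedily, taking a fraction of the last item
    for v, w in items[k:]:
        if w <= capacity:
            capacity -= w
            value += v
        else:
            return value + v * capacity / w        # fractional part
    return value

def branch_and_bound(values, weights, capacity):
    items = sorted(zip(values, weights), key=lambda t: t[0] / t[1], reverse=True)
    best = 0
    def search(k, cap, value):
        nonlocal best
        if k == len(items):
            best = max(best, value)
            return
        # prune: optimistic estimate no better than the incumbent
        if relaxation_bound(items, k, cap, value) <= best:
            return
        v, w = items[k]
        if w <= cap:
            search(k + 1, cap - w, value + v)       # branch: take item k
        search(k + 1, cap, value)                   # branch: skip item k
    search(0, capacity, 0)
    return best

optimum = branch_and_bound([45, 48, 35], [5, 8, 3], 10)
```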


Constraint Programming

  • Computational paradigm

    • use constraints to reduce the set of values that each variable can take
    • make a choice when no more deduction can be performed
    • if a choice is wrong, then backtrack and try another value
  • Branch and Prune

  • Constraint propagation: feasibility checking and pruning

  • Break value symmetry

  • Reduce the search space using global constraints and redundant constraints

  • All-different constraint
    model it as a bipartite graph with one vertex set for the variables and one for the values; feasibility amounts to a matching that covers every variable

  • First-fail principle

    try first where you are most likely to fail:

    • choose the variable with the smallest domain

    • choose the value that leaves as many options as possible to the other variables

Local Search

  • move from configuration to configuration by performing local moves

  • work with complete assignments to the decision variables and modify them

  • how to select a move

    max/min conflict

    • choose a decision variable that appears in the most violations
    • assign it a value that minimizes its violations
  • local minima, no guarantees for global optimality
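The min-conflict rule can be sketched on N-queens: start from a complete (random) assignment and repeatedly repair a conflicted variable. This is a minimal illustration, assuming a step limit as the stopping criterion:

```python
import random

def conflicts(rows, col):
    # number of queens attacking queen `col` (same row or same diagonal)
    return sum(1 for c, r in enumerate(rows)
               if c != col and (r == rows[col] or abs(r - rows[col]) == abs(c - col)))

def min_conflicts(n, max_steps=10000, seed=0):
    rng = random.Random(seed)
    rows = [rng.randrange(n) for _ in range(n)]    # complete random assignment
    for _ in range(max_steps):
        bad = [c for c in range(n) if conflicts(rows, c) > 0]
        if not bad:
            return rows                            # no violations left: solved
        col = rng.choice(bad)                      # a variable in some violation
        # move it to the row that minimizes its own violations
        rows[col] = min(range(n),
                        key=lambda r: sum(1 for c, q in enumerate(rows)
                                          if c != col and
                                          (q == r or abs(q - r) == abs(c - col))))
    return None                                    # stuck: local minimum / step limit

solution = min_conflicts(8)
```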

Simulated Annealing

  • a metaheuristic that extends local search

    1. start with a high temperature, essentially a random walk
    2. decrease the temperature progressively
    3. accept a worse solution with probability exp(-(Enew-E)/T)
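The three steps above can be sketched on a toy 1-D energy landscape (the function, schedule, and parameter values are illustrative assumptions, not from the course):

```python
import math
import random

def anneal(energy, start, neighbor, t0=10.0, cooling=0.999, steps=20000, seed=0):
    rng = random.Random(seed)
    x, e = start, energy(start)
    best_x, best_e = x, e
    t = t0                                  # step 1: start hot (near random walk)
    for _ in range(steps):
        y = neighbor(x, rng)
        e_new = energy(y)
        # step 3: always accept improvements; accept a worse move
        # with probability exp(-(Enew - E)/T)
        if e_new <= e or rng.random() < math.exp(-(e_new - e) / t):
            x, e = y, e_new
            if e < best_e:
                best_x, best_e = x, e       # remember the best solution seen
        t *= cooling                        # step 2: cool down progressively
    return best_x, best_e

# a bumpy 1-D energy landscape with many local minima (illustrative)
f = lambda x: (x - 3) ** 2 + 2 * math.sin(5 * x)
x_best, e_best = anneal(f, start=-10.0,
                        neighbor=lambda x, rng: x + rng.uniform(-0.5, 0.5))
```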

Tabu Search

  • maintain the sequence of nodes already visited

  • select the best configuration that is not tabu, i.e. has not been visited before

  • expensive to maintain all the visited nodes

    • short-term memory: only keep a small suffix of the visited nodes
    • change the tabu size dynamically: decrease it when the selected node degrades the objective, increase it when it improves
  • store the transitions, not the states

  • Intensification

    store high quality solutions and return to them periodically

  • Diversification

    when the search is not producing improvement, diversify the current state

  • Strategic oscillation

    change the percentage of time spent in the feasible and infeasible regions
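A short tabu-search sketch on the knapsack instance from the Branch and Bound section: moves flip one item in or out, and the short-term memory stores recently flipped indices (transitions), not whole states. The fixed tenure and aspiration rule are illustrative assumptions:

```python
from collections import deque

def tabu_knapsack(values, weights, capacity, tenure=3, steps=100):
    n = len(values)
    current = [0] * n
    best, best_value = list(current), 0
    tabu = deque(maxlen=tenure)            # small suffix of recent moves

    def evaluate(sol):
        w = sum(wi for x, wi in zip(sol, weights) if x)
        v = sum(vi for x, vi in zip(sol, values) if x)
        return v if w <= capacity else -1  # penalize infeasible states

    for _ in range(steps):
        move, move_value = None, None
        for i in range(n):                 # best non-tabu flip
            cand = list(current)
            cand[i] ^= 1
            v = evaluate(cand)
            # aspiration: allow a tabu move if it beats the best found so far
            if i in tabu and v <= best_value:
                continue
            if move_value is None or v > move_value:
                move, move_value = i, v
        if move is None:                   # every neighbor is tabu
            break
        current[move] ^= 1
        tabu.append(move)                  # store the transition, not the state
        if move_value > best_value:
            best, best_value = list(current), move_value
    return best, best_value

solution, value = tabu_knapsack([45, 48, 35], [5, 8, 3], 10)
```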


Linear Programming

a linear objective with linear equality and inequality constraints

Every point in a polytope is a convex combination of its vertices
min         c1 x1 + ... + cn xn
subject to  a11 x1 + ... + a1n xn ≤ b1
            ...
            am1 x1 + ... + amn xn ≤ bm
            xi ≥ 0    (i ∈ 1..n)
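The vertex property can be illustrated directly on a tiny 2-D LP: enumerate every intersection of two constraint boundaries, keep the feasible ones (these are the polytope's vertices), and take the best. The example LP is a hypothetical instance, and this brute force only works for tiny problems; real solvers use the simplex or interior-point methods:

```python
from itertools import combinations

# hypothetical LP:  min -3x - 5y
#   s.t.  x <= 4,  2y <= 12,  3x + 2y <= 18,  x >= 0,  y >= 0
A = [(1, 0), (0, 2), (3, 2), (-1, 0), (0, -1)]   # rows of A in  A @ (x, y) <= b
b = [4, 12, 18, 0, 0]
c = (-3, -5)

def intersect(r1, r2):
    # solve the 2x2 system where both constraint boundaries are tight
    (a1, a2), b1 = A[r1], b[r1]
    (a3, a4), b2 = A[r2], b[r2]
    det = a1 * a4 - a2 * a3
    if abs(det) < 1e-9:
        return None                               # parallel boundaries
    return ((b1 * a4 - a2 * b2) / det, (a1 * b2 - b1 * a3) / det)

# vertices = feasible intersections of constraint-boundary pairs
vertices = [p for i, j in combinations(range(len(A)), 2)
            if (p := intersect(i, j)) is not None
            and all(ai * p[0] + aj * p[1] <= bi + 1e-9
                    for (ai, aj), bi in zip(A, b))]
# the optimum of a linear objective over the polytope sits at a vertex
best = min(vertices, key=lambda p: c[0] * p[0] + c[1] * p[1])
```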
