🚀 The Grand Adventure of Shortest Path Algorithms: From Maze Newbie to Graph Theory Ninja (Ultra Extended Edition)
Table of Contents
- 🚀 The Grand Adventure of Shortest Path Algorithms: From Maze Newbie to Graph Theory Ninja (Ultra Extended Edition)
- 🧩 Chapter 1: The Foundations of the Graph Theory Universe (Ultra Extended Edition)
- 🚶 Chapter 2: Breadth-First Search (BFS) — The Equal-Opportunity Explorer (Ultra Extended Edition)
- ⚡ Chapter 3: Dijkstra's Algorithm — The Cautious Conqueror (Ultra Extended Edition)
- 🕵️ Chapter 4: Bellman-Ford Algorithm — The Paranoid Detective (Ultra Extended Edition)
- 🔥 Chapter 5: SPFA Algorithm — The Social Butterfly Version of Bellman-Ford (Ultra Extended Edition)
- 🌌 Chapter 6: Floyd-Warshall Algorithm — The Omniscient God's Perspective (Ultra Extended Edition)
- 🎯 Chapter 7: A* Algorithm — The Intelligent Search with GPS (Ultra Extended Edition)
- 🏆 Chapter 8: The Algorithm Beauty Pageant (Ultra Extended Edition)
- 🚨 Chapter 9: Negative Weight Cycles — The Black Holes of Graph Theory (Ultra Extended Edition)
- 🌟 Chapter 10: Practical Training Camp (Ultra Extended Edition)
- 🎓 Conclusion: Becoming a Path Master (Ultra Extended Edition)
Warning: Reading this article may cause the following side effects: calculating optimal paths when looking at maps, planning food delivery routes with algorithmic thinking, suddenly shouting “This is a negative weight cycle!” during dates. Please bring your own caffeine and fasten your seatbelts—we’re taking off!
🧩 Chapter 1: The Foundations of the Graph Theory Universe (Ultra Extended Edition)
1.1 What is a Graph? (Not the Kind You’re Thinking Of!)
A graph G = (V, E) is a mathematical structure consisting of a set of vertices V and a set of edges E, where:
- Vertices (V): Key nodes in the universe, like your ex, current partner, and potential crushes: $V = \{v_1, v_2, \dots, v_n\}$
- Edges (E): Connections between nodes, like the "WeChat delete" edge between you and your ex: $E \subseteq \{ (u,v) \mid u,v \in V \}$
1.2 Weights: The “Cost Metric” of the World
Each edge can carry a weight w: E → ℝ, representing:
- Distance cost: w(Beijing, Shanghai) = 1200 km
- Time cost: w(Waking Up, Going to Work) = Pain Coefficient 8.5
- Economic cost: w(Proposal, Marriage) = $$$$...
- Emotional cost: w(You, First Love) = 1000 Damage Points
Weights can be negative! For example: w(Boss, Raise) = -1000 yuan (because money flows from the boss’s pocket to yours)
For example, the adjacency matrix of a typical workday (rows = from, columns = to, ∞ = no direct edge):
$$\begin{bmatrix} & \text{Bed} & \text{Toilet} & \text{Office} & \text{Off Work} \\ \text{Bed} & 0 & 10 & \infty & \infty \\ \text{Toilet} & \infty & 0 & 2 & \infty \\ \text{Office} & \infty & \infty & 0 & 480 \\ \text{Off Work} & 60 & \infty & \infty & 0 \\ \end{bmatrix}$$
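In code, a graph like this is usually stored as a weighted adjacency list rather than a full matrix. Here is a minimal sketch in Python, assuming the workday graph above; the name workday and the weight helper are illustrative additions, not from any library:

import math

# Weighted adjacency list for the workday graph above.
# workday[u][v] = w means there is an edge u -> v with weight w;
# a missing entry plays the role of the matrix's ∞.
workday = {
    "Bed":      {"Toilet": 10},
    "Toilet":   {"Office": 2},
    "Office":   {"Off Work": 480},
    "Off Work": {"Bed": 60},
}

def weight(graph, u, v):
    """Return w(u, v), or infinity if there is no direct edge."""
    return graph[u].get(v, math.inf)

print(weight(workday, "Bed", "Toilet"))   # 10
print(weight(workday, "Bed", "Office"))   # inf (no direct edge)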
🚶 Chapter 2: Breadth-First Search (BFS) — The Equal-Opportunity Explorer (Ultra Extended Edition)
2.1 Algorithm Core: Egalitarian Exploration
from collections import deque

def BFS(G, start):
    # G: adjacency dict mapping each node to an iterable of its neighbors
    queue = deque([start])   # Initialize queue, enqueue start
    visited = {start: 0}     # Record distance, start at 0
    parent = {start: None}   # Record parent node
    while queue:
        u = queue.popleft()          # Dequeue, treat every node equally
        for v in G[u]:               # Scan neighbors
            if v not in visited:
                visited[v] = visited[u] + 1  # ⭐Key step! Distance +1
                parent[v] = u                # Record who's the parent
                queue.append(v)
    return visited, parent
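A quick, hedged usage sketch of the BFS above on a tiny made-up friendship graph (all names invented for illustration):

friends = {
    "You":   ["Alice", "Bob"],
    "Alice": ["You", "Carol"],
    "Bob":   ["You"],
    "Carol": ["Alice"],
}
hops, parent = BFS(friends, "You")
print(hops["Carol"])   # 2 (Carol is two handshakes away)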
2.2 Mathematical Essence: Wavefront Propagation Model
Distance formula:
$$d(v) = \min_{u \in \text{prev}(v)} \{\, d(u) + 1 \,\}$$
Gantt Chart: BFS’s Timeline of Conquest
Real-World Applications:
- WeChat friend circles (testing the six degrees of separation theory)
- Virus propagation models (don’t worry, it’s computer viruses!)
- Finding the nearest toilet (emergency situations!)
⚡ Chapter 3: Dijkstra’s Algorithm — The Cautious Conqueror (Ultra Extended Edition)
3.1 Core Idea: The Perfect Execution of Greedy Strategy
“Always eat the chocolate closest to you until you find a new one that’s even closer”
import heapq

def Dijkstra(G, start):
    # G: dict of dicts, G[u][v] = weight of edge u -> v (all weights >= 0)
    dist = {v: float('inf') for v in G}  # Initialize distances as infinity
    dist[start] = 0
    heap = [(0, start)]          # Min-heap for speed, stores (distance, node)
    path_tracker = {start: []}   # Path tracker
    while heap:
        d_u, u = heapq.heappop(heap)   # Pop the node with the smallest distance
        if d_u != dist[u]:             # Skip outdated heap entries (important optimization!)
            continue
        for v, w in G[u].items():      # Traverse neighbors
            new_dist = dist[u] + w     # ⭐Relaxation! Try to update distance
            if new_dist < dist[v]:
                dist[v] = new_dist
                path_tracker[v] = path_tracker[u] + [u]  # Update path
                heapq.heappush(heap, (new_dist, v))      # Push to heap
    return dist, path_tracker
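A hedged usage sketch, reusing the illustrative workday graph from Chapter 1 (the node names are example assumptions, not part of the algorithm):

dist, paths = Dijkstra(workday, "Bed")
print(dist["Off Work"])                  # 10 + 2 + 480 = 492
print(paths["Off Work"] + ["Off Work"])  # ['Bed', 'Toilet', 'Office', 'Off Work']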
3.2 Mathematical Principle: Proof of Optimal Substructure
Relaxation Operation:
$$\delta(v) = \min\bigl( \delta(v),\; \delta(u) + w(u,v) \bigr)$$
Theorem: When all edge weights are non-negative, Dijkstra’s algorithm always finds the shortest path.
Gantt Chart: Dijkstra’s Conquest Schedule
gantt
title Dijkstra's Conquest Schedule
dateFormat ss
axisFormat %S seconds
section Node Conquest
Start A : done, a0, 0, 1
Node B (Distance 2) : active, a1, 1, 2
Node C (Distance 4) : crit, a2, 2, 3
Node D (Distance 5) : crit, a3, 3, 4
Goal E (Distance 7) : milestone, a4, 4, 5
section Heap Operations
Initial Heap[A:0] : a0, 0, 1
Pop A, Push B:2, C:4 : a1, 1, 2
Pop B, Push D:5 : a2, 2, 3
Pop C, No Update : a3, 3, 4
Pop D, Push E:7 : a4, 4, 5
Bloody Lesson:
When you try to use Dijkstra to calculate “stock arbitrage paths”—
Negative edge weights break Dijkstra! Its greedy correctness proof assumes non-negative weights, so a negative edge can make it settle a node too early and report a wrong distance.
🕵️ Chapter 4: Bellman-Ford Algorithm — The Paranoid Detective (Ultra Extended Edition)
4.1 Algorithm Design: The Beauty of Brute Force
def BellmanFord(G, start):
    # G: dict of dicts, G[u][v] = weight of edge u -> v (negative weights allowed)
    dist = {v: float('inf') for v in G}
    dist[start] = 0
    parent = {start: None}
    # Phase 1: V-1 rounds of relaxation (V = number of vertices)
    for i in range(len(G) - 1):  # ⭐Key loop, V-1 rounds
        updated = False
        for u in G:
            for v, w in G[u].items():
                if dist[u] + w < dist[v]:
                    dist[v] = dist[u] + w  # Relaxation
                    parent[v] = u
                    updated = True
        if not updated:
            break  # Early termination optimization
    # Phase 2: Detect negative weight cycles
    for u in G:
        for v, w in G[u].items():
            if dist[u] + w < dist[v]:  # Can still relax?
                raise ValueError("Negative weight cycle detected! Path doesn't exist!")
    return dist, parent
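A hedged usage sketch on a small made-up graph containing one negative edge but no negative cycle (market and its node names are invented for illustration):

market = {
    "A": {"B": 4, "C": 2},
    "B": {"D": -3},          # negative edge, but no negative cycle
    "C": {"B": 1},
    "D": {},
}
dist, parent = BellmanFord(market, "A")
print(dist["D"])   # A -> C -> B -> D = 2 + 1 - 3 = 0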
4.2 Mathematical Essence: Dynamic Programming Optimization
Path Length Upper Bound Lemma: After k rounds of relaxation, the algorithm has found every shortest path that uses at most k edges.
Gantt Chart: Bellman-Ford’s Investigation Progress
gantt
title Bellman-Ford's Investigation Progress
dateFormat ss
axisFormat %S seconds
section Relaxation Rounds
Round 1 : a1, 0, 5
Round 2 : a2, 5, 10
Round 3 : a3, 10, 15
...
Round V-1 : a4, after a3, 5
section Detection Content
Direct Neighbors : done, a1, 0, 5
2-Hop Neighbors : active, a2, 5, 10
3-Hop Neighbors : crit, a3, 10, 15
Full Graph Coverage : milestone, a4, 15, 20
section Negative Cycle Detection
Final Check : crit, after a4, 5
Real-World Mapping:
- Financial arbitrage detection (negative weight cycle = infinite arbitrage opportunity)
- Damage boost loops in games (“immortality bug”)
- Time travel paradox (if you go back in time to kill your grandfather, do you disappear?)
🔥 Chapter 5: SPFA Algorithm — The Social Butterfly Version of Bellman-Ford (Ultra Extended Edition)
5.1 Optimization Idea: Queue-Driven Relaxation
from collections import deque

def SPFA(G, start):
    # G: dict of dicts, G[u][v] = weight of edge u -> v (negative weights allowed)
    dist = {v: float('inf') for v in G}
    dist[start] = 0
    queue = deque([start])
    in_queue = {start}            # Avoid duplicate enqueue
    count = {v: 0 for v in G}     # Enqueue counter
    while queue:
        u = queue.popleft()
        in_queue.remove(u)
        for v, w in G[u].items():
            new_dist = dist[u] + w
            if new_dist < dist[v]:
                dist[v] = new_dist
                if v not in in_queue:          # Only enqueue if not already in queue
                    count[v] += 1
                    if count[v] > len(G):      # Detect negative cycle
                        raise ValueError("Negative weight cycle detected!")
                    queue.append(v)
                    in_queue.add(v)
    return dist
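Assuming the same illustrative market graph from the Bellman-Ford sketch above, SPFA returns the same answer, typically with fewer relaxations on sparse inputs:

print(SPFA(market, "A")["D"])   # 0, same result as Bellman-Ford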
5.2 Performance Analysis: Worst Case vs. Average
Table: SPFA's Party Invitation List
| Scenario | Time Complexity | Performance |
|---|---|---|
| Sparse Graph | O(kE) | Flash Speed 🚀 |
| Grid Graph | O(VE) | Normal Human Speed 👨💻 |
| Poisoned Graph | O(VE) | Snail Speed 🐌 |
| Negative Cycle | ∞ | Perpetual Motion Warning ⚠️ |
Fun Fact: When SPFA’s inventor Duan Fanding published his paper in 1994, he was probably eating hot pot 🍲
🌌 Chapter 6: Floyd-Warshall Algorithm — The Omniscient God’s Perspective (Ultra Extended Edition)
6.1 The Beauty of Triple-Nested Loops in Dynamic Programming
def FloydWarshall(G):
    # G: list of dicts, G[i][j] = weight of edge i -> j (nodes numbered 0..n-1)
    n = len(G)
dist = [[float('inf')] * n for _ in range(n)]
path = [[None] * n for _ in range(n)] # Path tracking
# Initialization: Direct neighbors
for i in range(n):
dist[i][i] = 0
for j, w in G[i].items():
dist[i][j] = w
path[i][j] = i
# DP Core: k is the intermediate node
for k in range(n): # Intermediate node
for i in range(n): # Start node
for j in range(n): # End node
if dist[i][k] + dist[k][j] < dist[i][j]:
dist[i][j] = dist[i][k] + dist[k][j] # ⭐Update distance
path[i][j] = path[k][j] # Update path
return dist, path
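The path table above stores, for each pair (i, j), the node that immediately precedes j on the best i→j route. A minimal, hedged helper to unfold it into an actual node sequence (reconstruct_route and the toy graph are my own additions for illustration):

def reconstruct_route(path, i, j):
    """Rebuild the node sequence i -> ... -> j from Floyd-Warshall's path table."""
    if path[i][j] is None:
        return None            # j is unreachable from i
    route = [j]
    while j != i:
        j = path[i][j]         # step back to j's predecessor on the i -> j path
        route.append(j)
    return route[::-1]

# Assumed toy graph: 0 -> 1 (w=1), 1 -> 2 (w=1), 0 -> 2 (w=5)
toy = [{1: 1, 2: 5}, {2: 1}, {}]
dist, path = FloydWarshall(toy)
print(dist[0][2])                      # 2, via node 1
print(reconstruct_route(path, 0, 2))   # [0, 1, 2]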
6.2 Mathematical Essence: Path Space Decomposition
Define $d^{(k)}(i,j)$ as the length of the shortest path from $i$ to $j$ whose intermediate nodes all lie in $\{1, 2, \dots, k\}$.
State Transition Equation:
$$d^{(k)}(i,j) = \min\bigl( d^{(k-1)}(i,j),\; d^{(k-1)}(i,k) + d^{(k-1)}(k,j) \bigr)$$
Gantt Chart: Floyd’s Universe Construction Plan
gantt
title Floyd's Universe Construction Plan
dateFormat ss
axisFormat %S seconds
section Intermediate Nodes
Node 0 : a0, 0, 10
Node 1 : a1, 10, 20
Node 2 : a2, 20, 30
...
Node n-1 : an, after a2, 10
section Knowledge Accumulation
Direct Connections : done, a0, 0, 10
1-Hop Paths : active, a1, 10, 20
2-Hop Paths : crit, a2, 20, 30
Omniscience : milestone, an, 30, 40
Mind-Blowing Fact:
Running Floyd on a 1,000-node graph takes 1000³ = 1 billion iterations of the inner loop!
That's like recruiting a billion people to each do one addition for you 😱
🎯 Chapter 7: A* Algorithm — The Intelligent Search with GPS (Ultra Extended Edition)
7.1 Heuristic Search: Giving Dijkstra a Navigation System
import heapq

def A_star(G, start, goal, heuristic):
    # G: dict of dicts, G[u][v] = weight of edge u -> v (non-negative);
    # heuristic(v, goal) should never overestimate the true remaining distance
    open_heap = [(heuristic(start, goal), start)]   # entries are (f_score, node)
    g_score = {v: float('inf') for v in G}
    g_score[start] = 0
    came_from = {}
    while open_heap:
        f_current, current = heapq.heappop(open_heap)
        if current == goal:
            return reconstruct_path(came_from, current)
        if f_current > g_score[current] + heuristic(current, goal):
            continue                                 # stale heap entry, skip it
        for neighbor, w in G[current].items():
            tentative_g = g_score[current] + w
            if tentative_g < g_score[neighbor]:
                came_from[neighbor] = current
                g_score[neighbor] = tentative_g
                f_new = tentative_g + heuristic(neighbor, goal)
                heapq.heappush(open_heap, (f_new, neighbor))  # lazy duplicates instead of decrease-key
    return None                                      # goal unreachable
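The sketch above calls reconstruct_path, which the article never defines; a minimal version (name and shape assumed) simply walks the came_from map backwards:

def reconstruct_path(came_from, current):
    """Walk back through came_from from the goal to the start."""
    path = [current]
    while current in came_from:
        current = came_from[current]
        path.append(current)
    return path[::-1]   # start -> ... -> goal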
7.2 The Art of Heuristic Function Design
Admissibility:
$$h(v) \leq \text{actual distance}(v, goal)$$
Consistency:
$$h(u) \leq w(u,v) + h(v)$$
Gantt Chart: A’s Intelligent Navigation*
Classic Heuristics:
- Manhattan Distance: h(a,b) = |x₁-x₂| + |y₁-y₂| (only up/down/left/right movement)
- Euclidean Distance: h(a,b) = √((x₁-x₂)² + (y₁-y₂)²) (straight-line distance)
- Chebyshev Distance: h(a,b) = max(|x₁-x₂|, |y₁-y₂|) (king's move, 8 directions)
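A hedged sketch of these three heuristics as plain Python functions over (x, y) grid coordinates, suitable for passing into the A_star sketch above:

import math

def manhattan(a, b):
    return abs(a[0] - b[0]) + abs(a[1] - b[1])

def euclidean(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

def chebyshev(a, b):
    return max(abs(a[0] - b[0]), abs(a[1] - b[1]))

# e.g. A_star(grid_graph, (0, 0), (3, 4), manhattan), where grid_graph is an
# assumed dict-of-dicts whose nodes are (x, y) tuples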
🏆 Chapter 8: The Algorithm Beauty Pageant (Ultra Extended Edition)
8.1 Scenario Adaptation Guide
| Algorithm | Best Scenario | Worst Fear | Personality Analysis |
|---|---|---|---|
| BFS | Unweighted social graphs | Weighted graphs | Simple and direct |
| Dijkstra | Positive-weight routing | Negative weights | Cautiously optimistic |
| Bellman-Ford | Financial models with neg. weights | Dense graphs | Paranoid and careful |
| SPFA | Fast sparse graph solving | Poisoned input data | Opportunistic |
| Floyd | Small all-pairs shortest paths | Large graphs | Jack of all trades but slow |
| A* | Path planning with heuristics | Bad heuristics | Smart but GPS-dependent |
8.2 Ultimate Complexity Showdown
| Algorithm | Time Complexity | Space Complexity | Magic Value |
|---|---|---|---|
| BFS | O(V+E) | O(V) | ✨ |
| Dijkstra | O((V+E) log V) | O(V) | ✨✨ |
| Bellman-Ford | O(VE) | O(V) | ✨✨✨ |
| SPFA | O(VE) → O(kE) | O(V) | ✨✨✨✨ |
| Floyd-Warshall | O(V³) | O(V²) | ✨ |
| A* | O(b^d) | O(b^d) | Depends on h |
8.3 Gantt Chart: Algorithm Competition Performance
🚨 Chapter 9: Negative Weight Cycles — The Black Holes of Graph Theory (Ultra Extended Edition)
9.1 Detection Techniques Compared
| Method | Principle | Efficiency |
|---|---|---|
| Bellman-Ford | Can still relax in V-th round | O(VE) |
| SPFA | Node enqueue count > V | O(VE) |
| Tarjan’s SCC | Detect in strongly connected components | O(V+E) |
9.2 Philosophical Musings on Negative Cycles
“Traversing a negative weight cycle is like loving someone more the more you give—seemingly increasing returns, but actually a death loop”
Mathematical expression:
$$\exists\, \text{cycle } C:\ \sum_{e \in C} w(e) < 0$$
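A hedged demo of detection in practice, feeding a tiny made-up negative cycle into the BellmanFord sketch from Chapter 4:

trap = {
    "A": {"B": 1},
    "B": {"C": -2},
    "C": {"A": -2},   # A -> B -> C -> A sums to -3: a negative cycle
}
try:
    BellmanFord(trap, "A")
except ValueError as e:
    print(e)          # Negative weight cycle detected! Path doesn't exist!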
Gantt Chart: The Death Spiral of Negative Cycles
🌟 Chapter 10: Practical Training Camp (Ultra Extended Edition)
10.1 Classic Problem Solutions
- All-Pairs Shortest Paths
  🔥 Solution: Floyd-Warshall, or |V| runs of Dijkstra
- K-th Shortest Path
  💡 Solution: a variant of A* that keeps searching after each path is found, until K paths have been collected
- Difference Constraints System
  ⚙️ Modeling: $x_j - x_i \leq c_k \Rightarrow \text{edge } (i,j) \text{ with weight } c_k$, then run a shortest-path algorithm that tolerates negative weights (see the sketch below)
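As a hedged illustration of that modeling step (the variable numbering, the virtual source node, and solve_difference_constraints are my own choices): each constraint x_j − x_i ≤ c becomes an edge i → j with weight c, a virtual source reaches every variable with weight 0, and single-source shortest distances from that source form one feasible assignment; a negative cycle means the constraints contradict each other.

def solve_difference_constraints(n, constraints):
    """constraints: list of (i, j, c) meaning x_j - x_i <= c, variables 0..n-1."""
    G = {v: {} for v in range(n)}
    G["src"] = {v: 0 for v in range(n)}                # virtual source reaches everyone for free
    for i, j, c in constraints:
        G[i][j] = min(G[i].get(j, float('inf')), c)    # keep only the tightest constraint
    dist, _ = BellmanFord(G, "src")                    # raises ValueError if infeasible
    return [dist[v] for v in range(n)]                 # one feasible assignment of x_0..x_{n-1}

# x_1 - x_0 <= 3 and x_2 - x_1 <= -1  =>  prints one feasible answer, [0, 0, -1]
print(solve_difference_constraints(3, [(0, 1, 3), (1, 2, -1)]))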
10.2 Competition Tips Handbook
- Heap-Optimized Dijkstra Template (Must Memorize!)
  using pii = pair<int, int>;
  priority_queue<pii, vector<pii>, greater<pii>> pq;
  pq.push({0, start});
- SPFA's SLF Optimization (Double-ended queue trick)
  if (!q.empty() && dist[v] < dist[q.front()]) q.push_front(v);
  else q.push_back(v);
- Floyd's Loop Order Dogma (k-i-j is sacred and inviolable)
10.3 Gantt Chart: Competition Time Management
🎓 Conclusion: Becoming a Path Master (Ultra Extended Edition)
Remember these golden rules:
- Positive weights → Dijkstra (with heap optimization!)
- Negative weights → Bellman-Ford/SPFA
- All-pairs shortest paths → Floyd (small graphs) or Johnson (large graphs)
- Heuristic information available → A* for acceleration
“Shortest path algorithms are like life choices—
Dijkstra teaches us to advance step by step
A* tells us to navigate with ideals
Bellman-Ford reminds us to watch out for cycles of negative energy
And Floyd enlightens us: the big picture determines life’s altitude”
Now, when you use Google Maps to plan a route, remember: millions of Dijkstra instances are serving humanity at this very moment! 🌍💻
Ultimate Easter Egg: The Programmer’s Shortest Path Gantt Chart
Path Weights:
- w(Bed, Coffee Machine) = 10 min
- w(Coffee Machine, Desk) = 5 min
- w(Desk, Meeting Room) = +∞ 😂
- w(Meeting Room, Desk) = Psychological Trauma Coefficient 100
- w(Desk, Home) = Happiness +100
(End of article)