Initially, Sequence Parallelism + Tensor Parallelism was used;
However, as the number of Tensor Parallel GPU partitions grows, the computation on each GPU shrinks and can no longer be overlapped with the communication, so training slows down;
CP addresses these issues better. With CP, each GPU only computes on a part of the sequence, which reduces both computation and communication by a factor of CP; since the two shrink proportionally, overlapping them is no longer a concern. The activation memory footprint per GPU is also CP times smaller, so there is no longer an OOM issue.
Compared with the original Sequence Parallel + Tensor Parallel setup, the activation matrices moved by AllGather and Reduce-Scatter no longer contain all tokens, only each rank's shard, i.e., the communication volume is reduced. The communication introduced by Context Parallelism itself can be almost completely overlapped with computation.
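A back-of-the-envelope illustration of that shrinking message size (all numbers below, such as sequence length, hidden size, CP degree, and bf16 element size, are assumptions for illustration, not figures from the original post):

```python
# Illustrative arithmetic only; seq_len, hidden_size, micro_batch,
# dtype_bytes and cp_size are assumed values.
seq_len, hidden_size, micro_batch, dtype_bytes = 8192, 4096, 1, 2  # bf16
cp_size = 4

# One full-sequence activation tensor, as moved by an AllGather /
# Reduce-Scatter in the plain TP+SP setup (bytes):
full_activation = seq_len * hidden_size * micro_batch * dtype_bytes

# With CP, each rank only holds and communicates its 1/cp_size of the tokens:
per_rank_activation = full_activation // cp_size

print(f"without CP: {full_activation / 2**20:.1f} MiB per message")
print(f"with CP   : {per_rank_activation / 2**20:.1f} MiB per rank")
```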
Animated illustration of Context Parallelism:
[并行训练]Context Parallelism的原理与代码浅析 - 知乎 (zhihu.com)
3 key points:
1. Ring Attention is used; (overlap: while Q is computed against the current K/V chunk, the next K/V chunk is already in transit over the network; see the sketch after this list)
2. Because of the causal mask, each token only computes attention against the K and V of the tokens that precede it
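The two points above can be illustrated with a minimal single-process simulation (a sketch only: the function names `ring_attention` and `naive_causal_attention` are hypothetical; in a real multi-GPU setup each chunk lives on its own rank and K/V chunks are exchanged with P2P send/recv, so the transfer of the next chunk overlaps with the attention compute on the current one):

```python
# Minimal single-process sketch of ring attention with a causal mask.
# The "ring" is simulated sequentially here for clarity.
import numpy as np

def naive_causal_attention(q, k, v):
    """Reference: full causal attention as on a single device."""
    d = q.shape[-1]
    scores = q @ k.T / np.sqrt(d)
    mask = np.tril(np.ones(scores.shape, dtype=bool))
    scores = np.where(mask, scores, -np.inf)
    p = np.exp(scores - scores.max(axis=-1, keepdims=True))
    return (p / p.sum(axis=-1, keepdims=True)) @ v

def ring_attention(q, k, v, cp_size):
    """Sequence split into cp_size contiguous chunks; K/V chunks rotate
    around the ring while each rank keeps a running online-softmax state
    (row max, denominator, numerator)."""
    seq, d = q.shape
    chunk = seq // cp_size
    out = np.empty_like(q)
    for r in range(cp_size):                      # each "rank" r owns one Q chunk
        q_r = q[r * chunk:(r + 1) * chunk]
        m = np.full(chunk, -np.inf)               # running row max
        l = np.zeros(chunk)                       # running softmax denominator
        acc = np.zeros((chunk, d))                # running numerator
        for step in range(cp_size):               # one full ring rotation
            c = (r - step) % cp_size              # owner of the K/V chunk seen this step
            if c > r:                             # causal mask: entirely future tokens, skip
                continue
            k_c = k[c * chunk:(c + 1) * chunk]
            v_c = v[c * chunk:(c + 1) * chunk]
            scores = q_r @ k_c.T / np.sqrt(d)
            if c == r:                            # diagonal chunk: mask within the chunk
                scores = np.where(np.tril(np.ones_like(scores, dtype=bool)),
                                  scores, -np.inf)
            new_m = np.maximum(m, scores.max(axis=-1))
            corr = np.exp(m - new_m)              # rescale previous partial results
            p = np.exp(scores - new_m[:, None])
            l = l * corr + p.sum(axis=-1)
            acc = acc * corr[:, None] + p @ v_c
            m = new_m
        out[r * chunk:(r + 1) * chunk] = acc / l[:, None]
    return out

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    q, k, v = (rng.standard_normal((16, 8)) for _ in range(3))
    assert np.allclose(ring_attention(q, k, v, cp_size=4),
                       naive_causal_attention(q, k, v), atol=1e-6)
    print("ring attention matches single-device causal attention")
```

Real implementations (e.g. Megatron-LM's context parallelism) additionally reorder the sequence chunks so that, under the causal mask, every rank gets roughly the same amount of attention work; the contiguous chunking above is kept only for readability.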
