The Multipack sampler is designed for padding-free distributed training of large language models. It uses an approximate solution to the identical-machines scheduling problem to maximize batch-processing efficiency. On the OpenChat V1 training set, it achieves >99% theoretical efficiency, while the interleaved sampler achieves only ~75%.
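To make the scheduling idea concrete, here is a minimal first-fit-decreasing bin-packing sketch over sequence lengths. It illustrates the packing objective only and is not the repository's actual algorithm; `pack_sequences` and its arguments are hypothetical names.

```python
def pack_sequences(lengths, capacity):
    """First-fit-decreasing packing: fill fixed-capacity token bins with
    whole sequences so that almost no padding tokens are needed."""
    bins = []  # each bin: [remaining_capacity, [sequence indices]]
    # Place longer sequences first; short ones then fill the gaps.
    order = sorted(range(len(lengths)), key=lambda i: lengths[i], reverse=True)
    for i in order:
        for b in bins:
            if b[0] >= lengths[i]:       # sequence fits into this bin
                b[0] -= lengths[i]
                b[1].append(i)
                break
        else:                            # no existing bin fits: open a new one
            bins.append([capacity - lengths[i], [i]])
    return [b[1] for b in bins]

# Example: pack six sequences into 2048-token batches.
batches = pack_sequences([1500, 600, 400, 900, 1100, 48], capacity=2048)
```

Each returned bin corresponds to one padding-free batch: the token budget is consumed by real sequences instead of padding.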
V2 Update
Multipack V2 reduces the packing algorithm's complexity from O(n k log n) to O(n log k log n) without degrading packing efficiency, yielding better throughput when the number of nodes is large.
The V2 release also has two variants with different packing optimization objectives (a usage sketch follows the list):
MultipackDistributedBatchSampler: Designed for models with quadratic attention. It optimizes packing efficiency while also balancing long and short sequences across nodes, minimizing the difference in quadratic load.
MultipackDistributedBatchSampler_LinearAttention: For models with linear attention. It considers only packing efficiency, on which it outperforms the quadratic variant; however, it tends to place all long sequences on a single node.
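Below is a hedged sketch of how one of these samplers might be wired into a PyTorch DataLoader. The import path and the constructor arguments shown (batch_max_length, lengths, num_replicas, rank) are assumptions for illustration; check the sampler's source for the exact signature.

```python
import torch.distributed as dist
from torch.utils.data import DataLoader, Dataset

# Assumed import path; adjust to where the sampler lives in this repo.
from multipack_sampler import MultipackDistributedBatchSampler

class TokenizedDataset(Dataset):
    """Toy dataset: each item is a list of token ids of varying length."""
    def __init__(self, sequences):
        self.sequences = sequences
    def __len__(self):
        return len(self.sequences)
    def __getitem__(self, idx):
        return self.sequences[idx]

sequences = [[0] * n for n in (1500, 600, 400, 900, 1100, 48)]
dataset = TokenizedDataset(sequences)
lengths = [len(s) for s in sequences]  # sampler packs by sequence length

rank = dist.get_rank() if dist.is_initialized() else 0
world_size = dist.get_world_size() if dist.is_initialized() else 1

# Parameter names below are assumptions, not the confirmed API.
sampler = MultipackDistributedBatchSampler(
    batch_max_length=2048,   # per-node token capacity of one batch
    lengths=lengths,
    num_replicas=world_size,
    rank=rank,
)
loader = DataLoader(dataset, batch_sampler=sampler, collate_fn=list)
for batch in loader:
    ...  # each batch holds whole sequences packed up to 2048 tokens
```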
Benchmark
Please refer to test_multipack.ipynb.
Efficiency: percentage of actual batch size relative to the maximum batch size
= number of tokens per batch / maximum token capacity per batch
Utilization: fraction of useful work when all nodes wait for the slowest node
= number of tokens per batch / (maximum number of tokens on a single node * node count)
L^2 lag: spread of quadratic load across nodes
= sqrt(max over nodes(sum length^2) - min over nodes(sum length^2))
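As a concrete reference, the sketch below computes these three metrics for one batch. The names node_lengths (packed sequence lengths per node) and capacity (per-node token budget) are illustrative, not identifiers from the notebook.

```python
import math

def batch_metrics(node_lengths, capacity):
    """Compute efficiency, utilization, and L^2 lag for one packed batch.

    node_lengths: list with one entry per node; each entry is the list of
                  sequence lengths packed onto that node.
    capacity:     maximum number of tokens a single node can hold.
    """
    node_tokens = [sum(ls) for ls in node_lengths]
    total_tokens = sum(node_tokens)
    num_nodes = len(node_lengths)

    # Efficiency: tokens actually packed / maximum token capacity of the batch.
    efficiency = total_tokens / (capacity * num_nodes)

    # Utilization: every node waits for the slowest (most-loaded) node.
    utilization = total_tokens / (max(node_tokens) * num_nodes)

    # L^2 lag: spread of the quadratic (attention) load across nodes.
    sq_loads = [sum(l * l for l in ls) for ls in node_lengths]
    l2_lag = math.sqrt(max(sq_loads) - min(sq_loads))

    return efficiency, utilization, l2_lag

# Example: two nodes with different packings under a 2048-token budget.
eff, util, lag = batch_metrics([[1500, 400, 48], [1100, 900]], capacity=2048)
```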