Your PR has been submitted. Thanks for your contribution!
Please wait for the CI result first. See the Paddle CI Manual for details.
The unit test 'test_auto_parallel_fused_linear_promotion_pass.py' has been added, but the CI/CE machines cannot run it because 'fused_gemm_epilogue' requires at least CUDA 11.6. I have run the test myself:
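For reference, a test with such a toolkit requirement can guard itself so it is skipped automatically on machines with older CUDA. The sketch below is purely illustrative and is not taken from the PR's actual test; it assumes paddle.version.cuda() reports the CUDA version as a string, and the class and method names are made up:

```python
import unittest


def _cuda_at_least(major, minor):
    # Illustrative helper (not from the PR): parse the CUDA version reported by Paddle.
    # paddle.version.cuda() is assumed to return e.g. "11.6", or "False" for CPU-only builds.
    try:
        import paddle
        got_major, got_minor = (int(p) for p in paddle.version.cuda().split(".")[:2])
        return (got_major, got_minor) >= (major, minor)
    except (ImportError, ValueError):
        return False


@unittest.skipIf(not _cuda_at_least(11, 6),
                 "fused_gemm_epilogue requires CUDA >= 11.6")
class TestFusedLinearPromotionGuard(unittest.TestCase):
    def test_runs_only_on_new_enough_cuda(self):
        # Placeholder body; the real checks live in
        # test_auto_parallel_fused_linear_promotion_pass.py.
        self.assertTrue(True)


if __name__ == "__main__":
    unittest.main()
```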
PR types
Performance optimization
PR changes
Others
Description
Pcard-76459
Add a pass before the fused_linear pass to improve performance. This pass mainly addresses the following scenario when MP or TP parallelism is enabled.
The original linear operator is as follows:
matmul --> add
After enabling MP or TP, some linear operators may become:
matmul --> comm_op --> add
The communication operator prevents the matmul and add from being fused.
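To make the scenario concrete, the sketch below shows how such matmul --> comm_op --> add chains could be located. It is written in plain Python over a toy op-list representation with made-up op names; it is not Paddle's actual pass infrastructure. The real pass would then rewrite each detected chain so the add sits next to the matmul again, allowing the subsequent fused_linear pass to fuse them (the unit test depends on fused_gemm_epilogue for that fused kernel):

```python
# Hypothetical sketch: find matmul -> comm_op -> add chains that block fusion.
# The Op/program representation and op names are illustrative, not Paddle's real IR.
from dataclasses import dataclass, field

COMM_OPS = {"c_allreduce_sum", "allreduce", "all_gather"}  # assumed comm op names


@dataclass
class Op:
    type: str                                   # e.g. "matmul_v2", "c_allreduce_sum", "elementwise_add"
    inputs: list = field(default_factory=list)  # input variable names
    outputs: list = field(default_factory=list)  # output variable names


def find_promotion_candidates(ops):
    """Return (matmul_idx, comm_idx, add_idx) triples where a communication op
    sits between a matmul and its bias add, preventing fused_linear fusion."""
    candidates = []
    for i, matmul in enumerate(ops):
        if not matmul.type.startswith("matmul"):
            continue
        matmul_out = matmul.outputs[0]
        for j in range(i + 1, len(ops)):
            comm = ops[j]
            if comm.type not in COMM_OPS or matmul_out not in comm.inputs:
                continue
            comm_out = comm.outputs[0]
            for k in range(j + 1, len(ops)):
                add = ops[k]
                if add.type == "elementwise_add" and comm_out in add.inputs:
                    candidates.append((i, j, k))
    return candidates


if __name__ == "__main__":
    program = [
        Op("matmul_v2", ["x", "w"], ["y0"]),
        Op("c_allreduce_sum", ["y0"], ["y1"]),
        Op("elementwise_add", ["y1", "bias"], ["out"]),
    ]
    print(find_promotion_candidates(program))  # [(0, 1, 2)]
```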
Experiment
The experiment was run on GPT-3 with 6.7B parameters on a single host with 8 V100 GPUs: