Add Vectorized<c10::BFloat16> specialization for ARM #139090
Conversation
Stack from ghstack (oldest at bottom):

When we have hardware support, we can use it. When we don't have hardware support, we can still do better than vec_base.h. I'm not sure to what extent we're set up to properly test both `defined(__ARM_FEATURE_BF16)` and `!defined(__ARM_FEATURE_BF16)` builds; feedback especially welcome there.

Testing: vec_test_all_types should cover correctness. For perf, it seems clear that using vectorized intrinsics should beat vec_base.

Differential Revision: [D64997747](https://our.internmc.facebook.com/intern/diff/D64997747/)

cc @jgong5 @mingfeima @XiaobingSuper @sanchitintel @ashokei @jingxu10
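To make the two build flavors concrete, here is a minimal sketch of what each path can look like. This is illustrative only, not the PR's actual implementation: the intrinsic choices and the `Vectorized<c10::BFloat16>` interface are richer than this, and gating on `__ARM_FEATURE_BF16` alone is an assumption taken from the description above (compilers also advertise `__ARM_FEATURE_BF16_VECTOR_ARITHMETIC` for the vector intrinsics).

```cpp
#include <arm_neon.h>
#include <cstdint>

#if defined(__ARM_FEATURE_BF16)
// Hardware path (ARMv8.6 BF16): BFDOT fuses bf16 pair products into
// float32 accumulators, useful for dot-product-heavy kernels.
float32x4_t bf16_dot_step(float32x4_t acc, bfloat16x8_t a, bfloat16x8_t b) {
  return vbfdotq_f32(acc, a, b);
}
#else
// Fallback path: still much better than a scalar vec_base.h-style loop.
// bf16 is the high half of an IEEE-754 float32, so widening is a 16-bit
// shift and narrowing is round-to-nearest-even on the dropped low half.
float32x4_t bf16_to_f32(uint16x4_t b) {
  return vreinterpretq_f32_u32(vshll_n_u16(b, 16));
}

uint16x4_t f32_to_bf16(float32x4_t f) {
  uint32x4_t u = vreinterpretq_u32_f32(f);
  uint32x4_t lsb = vandq_u32(vshrq_n_u32(u, 16), vdupq_n_u32(1));
  uint32x4_t bias = vaddq_u32(lsb, vdupq_n_u32(0x7FFF));  // NaN handling omitted
  return vshrn_n_u32(vaddq_u32(u, bias), 16);
}

// Elementwise bf16 add via float32, four lanes at a time.
uint16x4_t bf16_add(uint16x4_t a, uint16x4_t b) {
  return f32_to_bf16(vaddq_f32(bf16_to_f32(a), bf16_to_f32(b)));
}
#endif
```

Either way the arithmetic itself happens in float32; what the hardware extension buys is fused widening products (BFDOT/BFMMLA/BFMLAL) and a one-instruction narrowing convert, since NEON BF16 has no elementwise bf16 ALU ops.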
🔗 Helpful Links: 🧪 See artifacts and rendered test results at hud.pytorch.org/pr/139090
✅ No failures as of commit 47201a1 with merge base 419a7e1. (This comment was automatically generated by Dr. CI.)
This pull request was exported from Phabricator. Differential Revision: D64997747
… ARM" When we have hardware support, we can use it. When we don't have hardware support, we can still do better than vec_base.h. I'm not sure to what extent we're set up to properly test both `defined(__ARM_FEATURE_BF16)` and `!defined(__ARM_FEATURE_BF16)` builds, feedback especially welcome there. Differential Revision: [D64997747](https://our.internmc.facebook.com/intern/diff/D64997747/) [ghstack-poisoned]
This pull request was exported from Phabricator. Differential Revision: D64997747 |
… ARM" When we have hardware support, we can use it. When we don't have hardware support, we can still do better than vec_base.h. I'm not sure to what extent we're set up to properly test both `defined(__ARM_FEATURE_BF16)` and `!defined(__ARM_FEATURE_BF16)` builds, feedback especially welcome there. Differential Revision: [D64997747](https://our.internmc.facebook.com/intern/diff/D64997747/) [ghstack-poisoned]
This pull request was exported from Phabricator. Differential Revision: D64997747 |
… ARM" When we have hardware support, we can use it. When we don't have hardware support, we can still do better than vec_base.h. I'm not sure to what extent we're set up to properly test both `defined(__ARM_FEATURE_BF16)` and `!defined(__ARM_FEATURE_BF16)` builds, feedback especially welcome there. Differential Revision: [D64997747](https://our.internmc.facebook.com/intern/diff/D64997747/) [ghstack-poisoned]
This pull request was exported from Phabricator. Differential Revision: D64997747 |
Landed. Pull Request resolved: pytorch#139090
Approved by: https://github.com/jgong5, https://github.com/malfet
ghstack dependencies: pytorch#139084
…torch#139558) Discovered this bug while working on Vectorized<BFloat16>; apparently we have no automated testing for aarch64 without FP16.

Testing: Manually disabled the FP16 feature for a local vec_test_all_types run on Mac; it passes.

Differential Revision: [D65385267](https://our.internmc.facebook.com/intern/diff/D65385267/)
Pull Request resolved: pytorch#139558
Approved by: https://github.com/malfet
ghstack dependencies: pytorch#139084, pytorch#139090
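For context, the untested configuration looks roughly like this; a sketch under the assumption that `__ARM_FEATURE_FP16_VECTOR_ARITHMETIC` is the relevant gating macro (the actual guard in the source may differ):

```cpp
#include <arm_neon.h>

#if defined(__ARM_FEATURE_FP16_VECTOR_ARITHMETIC)
// aarch64 with the FP16 extension: native half-precision arithmetic.
float16x8_t f16_add(float16x8_t a, float16x8_t b) {
  return vaddq_f16(a, b);
}
#else
// aarch64 without FP16 arithmetic (the path with no automated coverage):
// FCVTL/FCVTN are baseline A64, so widen to float32, add, and narrow back.
float16x8_t f16_add(float16x8_t a, float16x8_t b) {
  float32x4_t lo = vaddq_f32(vcvt_f32_f16(vget_low_f16(a)),
                             vcvt_f32_f16(vget_low_f16(b)));
  float32x4_t hi = vaddq_f32(vcvt_f32_f16(vget_high_f16(a)),
                             vcvt_f32_f16(vget_high_f16(b)));
  return vcombine_f16(vcvt_f16_f32(lo), vcvt_f16_f32(hi));
}
#endif
```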
…rch#139081) Following the previous move of fp16_gemv_trans.

Testing: Checked for a performance regression with llm_benchmarks' `python benchmarks/benchmark_torch_mm.py llm`; didn't find one.

Differential Revision: [D64930872](https://our.internmc.facebook.com/intern/diff/D64930872/)
Pull Request resolved: pytorch#139081
Approved by: https://github.com/malfet
ghstack dependencies: pytorch#139084, pytorch#139090, pytorch#139558
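As a reference point for what a bf16 gemv_trans computes, here is a hypothetical scalar shape (the names, the row-major layout, and the float32 accumulation are assumptions for illustration; the moved kernel itself is vectorized, e.g. with BFDOT where available):

```cpp
#include <cstdint>
#include <cstring>

// bf16 is stored here as a raw uint16_t bit pattern: the top half of a float32.
static inline float bf16_to_float(uint16_t b) {
  uint32_t u = static_cast<uint32_t>(b) << 16;
  float f;
  std::memcpy(&f, &u, sizeof(f));
  return f;
}

// y = A^T * x, with A an m x n row-major bf16 matrix, x of length m,
// and y of length n accumulated in float32.
void bf16_gemv_trans_ref(int m, int n, const uint16_t* A, const uint16_t* x,
                         float* y) {
  for (int j = 0; j < n; ++j) y[j] = 0.0f;
  for (int i = 0; i < m; ++i) {
    const float xi = bf16_to_float(x[i]);
    for (int j = 0; j < n; ++j) {
      y[j] += xi * bf16_to_float(A[i * n + j]);  // accumulate in float32
    }
  }
}
```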
pytorch#139208) Very similar to pytorch#137917, but for bf16.

Differential Revision: [D65155971](https://our.internmc.facebook.com/intern/diff/D65155971/)
Pull Request resolved: pytorch#139208
Approved by: https://github.com/malfet
ghstack dependencies: pytorch#139084, pytorch#139090, pytorch#139558, pytorch#139081
This is the big milestone for bf16 and should enable us to close pytorch/torchchat#1253.

Testing: Ran `python torchchat.py generate llama3.2-1b --dtype bf16 --device cpu` on an x86 machine with AVX512-BF16 and observed similar tokens/sec with and without the MKL path hand-disabled. Also observed a speedup from ~2.1 tok/sec to 7.4 tok/sec on an x86 machine with only AVX2.

Differential Revision: [D65170967](https://our.internmc.facebook.com/intern/diff/D65170967/)
Pull Request resolved: pytorch#139220
Approved by: https://github.com/malfet
ghstack dependencies: pytorch#139084, pytorch#139090, pytorch#139558, pytorch#139081, pytorch#139208
Fix typo causing a compilation error on aarch64 with BF16 support. (#139090)

tag: @swolchok
Pull Request resolved: #142370
Approved by: https://github.com/Skylion007, https://github.com/malfet