Fix "expand: SymIntArrayRef expected to contain only concrete integers" in AOTInductor #135933
Conversation
🔗 Helpful Links
🧪 See artifacts and rendered test results at hud.pytorch.org/pr/135933
Note: Links to docs will display an error until the docs builds have been completed.
✅ No Failures
As of commit b33d98d with merge base: failed to retrieve merge base, please contact dev infra.
This comment was automatically generated by Dr. CI and updates every 15 minutes.
…s" in AOTInductor

Internal xref: https://fb.workplace.com/groups/1075192433118967/permalink/1501860707118802/

Signed-off-by: Edward Z. Yang <ezyang@meta.com>
ghstack-source-id: 46fee3b
Pull Request resolved: #135933
@pytorchbot merge
Merge started. Your change will be merged once all checks pass (ETA 0-4 hours). Learn more about merging in the wiki. Questions? Feedback? Please reach out to the PyTorch DevX Team.
Summary:
- Following https://pytorch.org/executorch/stable/kernel-library-custom-aten-kernel.html, use WRAP_TO_ATEN to register preprocess in PyTorch.
- Create a separate `op_tile_crop_aot.py` that registers the C++ AOT library into Python. Inside export_preprocess, use `op_tile_crop_aot.py` instead of `preprocess_custom_ops.py`, which is the pure Python lib. Otherwise, we end up loading the C++ library when the Python one already exists.

Note: these PyTorch changes are required for AOTI export: pytorch/pytorch#135933

Pull Request resolved: #5350

Test Plan:
```
>>> import torch
>>> from executorch.extension.llm.custom_ops import sdpa_with_kv_cache  # noqa # usort: skip
>>> x = torch._export.aot_load("/home/lfq/local/executorch/aoti_preprocess.so", "cpu")
>>> img = torch.ones([3, 600, 800])
>>> canvas_size = torch.tensor([448, 448])
>>> target_size = torch.tensor([336, 448])
>>> res = x(img, target_size, canvas_size)
>>> res[0].shape
torch.Size([4, 3, 224, 224])
>>> res[1]
tensor([2, 2])
```

Reviewed By: larryliu0820
Differential Revision: D62651605
Pulled By: lucylq
fbshipit-source-id: bdf5b46033ebbd73d10307ab58219743a73fd6fd
…s" in AOTInductor (pytorch#135933)

Internal xref: https://fb.workplace.com/groups/1075192433118967/permalink/1501860707118802/

Signed-off-by: Edward Z. Yang <ezyang@meta.com>
Pull Request resolved: pytorch#135933
Approved by: https://github.com/angelayi
Stack from ghstack (oldest at bottom):
Internal xref:
https://fb.workplace.com/groups/1075192433118967/permalink/1501860707118802/
Signed-off-by: Edward Z. Yang <ezyang@meta.com>
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @ColinPeppler @amjames @desertfire @chauhang
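For context, the "expand: SymIntArrayRef expected to contain only concrete integers" error arises when `expand` is handed symbolic sizes where concrete integers were expected. A hypothetical sketch of the kind of pattern involved (the module and shapes are illustrative, not taken from this PR): in eager mode the `.item()` calls below return plain Python ints, but under `torch.export` they become data-dependent (unbacked) SymInts that AOTInductor must be able to carry through `expand`.

```python
import torch


class ExpandFromTensor(torch.nn.Module):
    """Expand a tensor to sizes read out of another tensor's data."""

    def forward(self, x: torch.Tensor, sizes: torch.Tensor):
        # In eager mode these are concrete ints; under export they are
        # data-dependent SymInts, the case exercised by this fix.
        h = sizes[0].item()
        w = sizes[1].item()
        return x.expand(h, w)
```

Running the module eagerly, e.g. `ExpandFromTensor()(torch.ones(1, 1), torch.tensor([2, 3]))`, simply returns a `(2, 3)` tensor; the symbolic-size path only comes into play during export/AOTI compilation.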