Unify cache disable and cache bypass paths #141685
Conversation
🔗 Helpful Links
🧪 See artifacts and rendered test results at hud.pytorch.org/pr/141685
Note: Links to docs will display an error until the docs builds have been completed.
✅ You can merge normally! (2 Unrelated Failures)
As of commit 5dcfc5a with merge base 0f261e8:
UNSTABLE - The following jobs failed but were likely due to flakiness present on trunk and have been marked as unstable.
This comment was automatically generated by Dr. CI and updates every 15 minutes.
I'm having some trouble following the intention of this one. What does it really mean here to "unify" bypass and miss, if you still have two separate branches for them? I.e., what's the end goal?
```python
input._is_inductor_static = True  # type: ignore[attr-defined]
# TODO: This is a hack purely to get some info to extract_tensor_metadata_for_cache_key,
# figure out how to not have to modify example inputs
for i, input in enumerate(example_inputs):
```
How come we can unconditionally do this now instead of before?
It is a behavior change, but I guessed it would be harmless because I audited the use sites and there are no "negative" usages (e.g., code that explicitly tests that this field is not defined). So I am guessing it is harmless to always have this populated. CI seems to agree.
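For context, a minimal self-contained sketch of the pattern in the excerpt above: tagging example inputs with a private attribute so that a later cache-key pass can read it. Only `_is_inductor_static` and the loop header come from the diff; the helper name and the condition inside the loop are illustrative assumptions.

```python
import torch

def mark_static_inputs(example_inputs, static_input_idxs):
    # Hypothetical helper: tag tensors at known-static positions so that a
    # later cache-key pass (extract_tensor_metadata_for_cache_key in the
    # real code) can observe staticness. The condition is an assumption.
    for i, input in enumerate(example_inputs):
        if isinstance(input, torch.Tensor) and i in static_input_idxs:
            input._is_inductor_static = True  # type: ignore[attr-defined]

inputs = [torch.randn(2), torch.randn(3)]
mark_static_inputs(inputs, {0})
print(getattr(inputs[0], "_is_inductor_static", False))  # True
print(getattr(inputs[1], "_is_inductor_static", False))  # False
```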
```python
local = config.fx_graph_cache
remote = fx_graph_remote_cache
# TODO: Remove this short circuit once types are unified here
```
Ok wait, isn't the entire point of this refactor to unify this type? Or is there another PR incoming to do that?
This gets eliminated in #141695
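To make the thread concrete, here is a hedged sketch of what a short circuit of this shape looks like. Only `local` and `remote` come from the excerpt; the function name and the `key_types_unified` flag are hypothetical stand-ins for the real type check that #141695 later removes.

```python
def should_try_fx_graph_cache(local: bool, remote: bool, key_types_unified: bool) -> bool:
    # No cache at all if neither the local nor the remote cache is enabled.
    if not (local or remote):
        return False
    # TODO-style short circuit from the excerpt: until key types are unified,
    # fall back to the uncached path (deleted later in the stack, per #141695).
    if not key_types_unified:
        return False
    return True

print(should_try_fx_graph_cache(local=True, remote=False, key_types_unified=False))  # False
print(should_try_fx_graph_cache(local=True, remote=False, key_types_unified=True))   # True
```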
```python
# In that case, we don't need to run all post compilation steps, we just need
# to return the string directly.
return compiled_graph

compiled_graph.post_compile2(example_inputs, cudagraphs, gm)
```
Specifically, this else branch got deleted
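In other words, every outcome now converges on a single post-compilation step. A minimal sketch, with hypothetical types and helpers around the `post_compile2` call from the excerpt:

```python
from dataclasses import dataclass

@dataclass
class CompiledGraph:
    # Hypothetical stand-in for Inductor's compiled-graph object.
    source: str

    def post_compile2(self, example_inputs, cudagraphs, gm):
        # Stand-in for the shared post-compilation steps
        # (cudagraphs setup, alignment checks, logging, ...).
        print(f"post-compile for {self.source}")

def load_or_compile(cache_hit, example_inputs, cudagraphs, gm):
    # Cache hit, miss, bypass, and disabled all produce a CompiledGraph...
    compiled_graph = cache_hit if cache_hit is not None else CompiledGraph("fresh compile")
    # ...and all of them flow through the same post-compile call; the old
    # "return the string directly" else branch is gone.
    compiled_graph.post_compile2(example_inputs, cudagraphs, gm)
    return compiled_graph

load_or_compile(None, [], None, None)                        # post-compile for fresh compile
load_or_compile(CompiledGraph("cache hit"), [], None, None)  # post-compile for cache hit
```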
@pytorchbot merge -f "unrelated failures"

still needs review

Merge started. Your change will be merged immediately since you used the force (-f) flag, bypassing any CI checks (ETA: 1-5 minutes). Learn more about merging in the wiki. Questions? Feedback? Please reach out to the PyTorch DevX Team.

Merge failed. Reason: PR #141685 has not been reviewed yet.
Stack from ghstack (oldest at bottom):
I was constantly annoyed that we had a separate else branch for when the cache was disabled, distinct from the branch for when the cache was bypassed. This diff gets rid of the disabled-cache branch, so we use the same logic for bypass/disable (see the sketch below). I don't think this change actually mattered much for the POC, but I think it's cleaner.
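Schematically (a hedged sketch with hypothetical helper names, not the actual Inductor code), the refactor folds the disabled case into the bypass path:

```python
from typing import Optional

def compile_without_cache(gm: str) -> str:
    return f"compiled({gm})"

def lookup_or_compile(gm: str) -> str:
    return f"cached({gm})"

def compile_fx(gm: str, cache_enabled: bool, bypass_reason: Optional[str]) -> str:
    # After this PR: a disabled cache is treated as just another bypass
    # reason, so both cases share one non-cached branch instead of two.
    if not cache_enabled:
        bypass_reason = "fx graph cache disabled"
    if bypass_reason is not None:
        return compile_without_cache(gm)
    return lookup_or_compile(gm)

print(compile_fx("gm", cache_enabled=False, bypass_reason=None))         # compiled(gm)
print(compile_fx("gm", cache_enabled=True, bypass_reason="unhashable"))  # compiled(gm)
print(compile_fx("gm", cache_enabled=True, bypass_reason=None))          # cached(gm)
```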
Signed-off-by: Edward Z. Yang <ezyang@meta.com>
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @ColinPeppler @amjames @desertfire @chauhang @aakhundov