This was not necessary with the new route and would have been a memory leak. Also, FYI, Dynamo today does seem to have a memory leak here regardless: we always add three items but only ever pop two.
Many thanks @aparajit-pratap for your insights. I have been trying to break this in various ways, unsuccessfully so far. It works well. In many cases where it currently takes 10-15 times longer to dispose objects than it takes to create them, it now takes about the same time to create and dispose. I am also seeing about 20% less memory being used.
Purpose
https://jira.autodesk.com/browse/DYN-4231
The purpose of this PR is to optimize the Sweep pass of the FullGC that occurs after any change to the graph that causes re-execution. Specifically, this optimization does the following:

- Introduces a dedicated call method on the Executive (`CallDispose`) tailored specifically for dispatching `Dispose` calls, rather than allowing these calls from the GC to pass through the generic `Callr`. It also introduces a dedicated dispatch method on the CallSite (`DispatchDispose`), likewise tailored to dispatching `Dispose` calls rather than using the generic `Dispatch`. In both cases, the normal overhead of `Callr` and `Dispatch` is much higher, and much more .NET memory/GC intensive, than the actual `Dispose` calls.
- Adds caching based on object type where possible, since the GC is often collecting similar types repeatedly (see the sketch after this section).

In testing, a graph with ~150,000 items to collect in the sweep pass saw the time reduced from 2s to 0.5s and the .NET memory allocation from 800MB to 16MB.

Todo: validate with testing that includes DS object dispose.
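As a rough illustration of the idea (not the actual ProtoCore implementation), the sketch below shows a dispose-only dispatch path with a per-type cache. The `FastDisposer` helper, the `HeapElement` stand-in, and all member names are hypothetical; the real `CallDispose`/`DispatchDispose` work against the DS runtime's own callsite machinery rather than reflection.

```csharp
using System;
using System.Collections.Generic;
using System.Reflection;

// Hypothetical stand-in for a heap element that wraps a CLR object;
// the real DS heap element looks nothing like this.
class HeapElement
{
    public object Payload;
}

// Sketch of a dispose-only dispatch path with a per-type cache.
// Instead of routing every Dispose through the general call machinery
// (argument packing, replication checks, callsite resolution), the sweep
// pass invokes Dispose directly and caches the method lookup per type.
static class FastDisposer
{
    // Runtime type -> resolved Dispose method (null if the type has none).
    private static readonly Dictionary<Type, MethodInfo> disposeCache =
        new Dictionary<Type, MethodInfo>();

    public static void DispatchDispose(HeapElement element)
    {
        var payload = element?.Payload;
        if (payload == null)
            return;

        var type = payload.GetType();
        if (!disposeCache.TryGetValue(type, out var disposeMethod))
        {
            // Resolve once per type; the sweep typically collects many
            // instances of the same few types, so the cache pays off fast.
            disposeMethod =
                type.GetMethod(nameof(IDisposable.Dispose), Type.EmptyTypes)
                ?? (typeof(IDisposable).IsAssignableFrom(type)
                        ? typeof(IDisposable).GetMethod(nameof(IDisposable.Dispose))
                        : null);
            disposeCache[type] = disposeMethod;
        }

        // Dispose takes no arguments and its return value is ignored, so no
        // argument marshalling or return-value handling is needed here.
        disposeMethod?.Invoke(payload, null);
    }
}

class Program
{
    static void Main()
    {
        // Example sweep over a batch of collected elements.
        var swept = new List<HeapElement>
        {
            new HeapElement { Payload = new System.IO.MemoryStream() },
            new HeapElement { Payload = "not disposable" },
        };

        foreach (var element in swept)
            FastDisposer.DispatchDispose(element);

        Console.WriteLine("Sweep complete.");
    }
}
```

The design point the sketch tries to capture is that `Dispose` takes no arguments and its return value is ignored, so the sweep pass can skip the argument packing, replication, and return-value bookkeeping that a general-purpose call path has to perform for every invocation.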
Declarations
Check these if you believe they are true
*.resx files
Reviewers
TBD
FYIs
@jasonstratton @QilongTang