VSO_0000000_vector_algorithms tends to take longer as more algorithms are vectorized.
I think we need to partition it.
Starting with the lex compare family: it is taking about 30% of the total run time.
Also, the lex compare direction looks complete and is unlikely to overlap with other algorithms.
With the variant_msvc tests, I permanently split them into a separate file because they're developed totally separately from the LLVM-derived tests. With the vector algorithms tests, I think keeping them in a single file might be better, because that way we can easily re-group how much we run each time.
Would it be better to keep everything in a single file, but controlled via macros?
Is there an easy way to select a group via some extra parameter to the `python tests\utils\stl-lit\stl-lit.py ..\..\tests\std\tests\VSO_0000000_vector_algorithms -v` command?
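For illustration, a single-file layout gated by macros could look roughly like this. This is only a sketch; the `TEST_GROUP_*` macro names are hypothetical and not existing test machinery:

```cpp
// Hypothetical macro-controlled grouping within one test .cpp.
// Each group compiles and runs only when its macro (or TEST_GROUP_ALL) is defined.
#if defined(TEST_GROUP_LEX_COMPARE) || defined(TEST_GROUP_ALL)
void test_lexicographical_compare_family() {
    // ... lexicographical_compare / lexicographical_compare_three_way cases ...
}
#endif

#if defined(TEST_GROUP_MINMAX) || defined(TEST_GROUP_ALL)
void test_minmax_element_family() {
    // ... min_element / max_element / minmax_element cases ...
}
#endif

int main() {
#if defined(TEST_GROUP_LEX_COMPARE) || defined(TEST_GROUP_ALL)
    test_lexicographical_compare_family();
#endif
#if defined(TEST_GROUP_MINMAX) || defined(TEST_GROUP_ALL)
    test_minmax_element_family();
#endif
}
```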
Also, partitioning algorithms by algorithm type seems more natural than partitioning views by iterator type.
Algorithm implementations are unrelated (I'm not going to split where they are related).
We could use tags (like with ASAN), but since the algorithms are unrelated and there's no commonality in the test support machinery, having a separate test does make sense. Let's keep this as-is, thanks!
there's no commonality in the test support machinery
There is no new commonality extracted during this separation; all of the commonality (mostly the randomness initialization) was previously extracted in #4734 into /tests/std/include/test_vector_algorithms_support.hpp to support separate floating-point minmax testing.
This specific commonality, I think, is not an indication that these tests should be together.
Instead, it seems to be something potentially reusable in more separate tests. The <charconv> test mentioned in #933 is a candidate.
Actually, I'm floating the separation idea with this PR.
When thinking about search_n, I concluded that a comprehensive enough test would have a cubic run time: O(H*N*F), where H is the haystack length, N is the needle length, and F is the frequency of a match, gradually incrementing all three. And it has no relationship to anything else, so I think it deserves its own separate .cpp.
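To make the cubic coverage idea concrete, here is a rough, self-contained sketch of enumerating H, N, and F. The bounds and the haystack construction are made up for illustration; a real test would also compare the vectorized search_n against a scalar reference:

```cpp
// Illustrative O(H * N * F) enumeration for search_n:
// H = haystack length, N = needle (run) length, F = number of planted matching runs.
#include <algorithm>
#include <cassert>
#include <cstddef>
#include <vector>

int main() {
    for (std::size_t h = 0; h <= 64; ++h) {           // haystack length
        for (std::size_t n = 1; n <= h; ++n) {        // run length to search for
            for (std::size_t f = 0; f * n <= h; ++f) { // how many matching runs to plant
                std::vector<int> haystack(h, 0);
                // Plant up to f runs of n consecutive 1s, separated by at least one 0.
                std::size_t pos = 0;
                for (std::size_t run = 0; run < f && pos + n <= h; ++run) {
                    std::fill_n(haystack.begin() + pos, n, 1);
                    pos += n + 1;
                }
                const auto result = std::search_n(haystack.begin(), haystack.end(), n, 1);
                if (f == 0) {
                    assert(result == haystack.end()); // no 1s planted, so no run of n ones
                } else {
                    assert(result != haystack.end()); // at least one full run was planted
                }
            }
        }
    }
}
```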
When thinking about search_n, I concluded that a comprehensive enough test would have a cubic run time
If a comprehensive test would have a long run time, then I'd recommend an off-by-default "EXHAUSTIVE" mode that allows the whole space to be run manually, while automated runs select a randomized subset of cases.
Expensive tests are problematic both from a machine resourcing and timeout perspective, so we need to be careful.
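A minimal sketch of that pattern, assuming a hypothetical EXHAUSTIVE macro and a run_one_case helper (names and bounds are illustrative):

```cpp
// Off-by-default exhaustive mode: automated runs exercise a randomized subset,
// while defining EXHAUSTIVE (e.g. locally) walks the whole (H, N, F) space.
#include <cstddef>
#include <random>

void run_one_case(std::size_t haystack_len, std::size_t needle_len, std::size_t frequency) {
    // ... actual search_n validation for one (H, N, F) combination ...
    (void) haystack_len;
    (void) needle_len;
    (void) frequency;
}

int main() {
#ifdef EXHAUSTIVE
    for (std::size_t h = 0; h <= 256; ++h) {
        for (std::size_t n = 1; n <= h; ++n) {
            for (std::size_t f = 0; f * n <= h; ++f) {
                run_one_case(h, n, f);
            }
        }
    }
#else
    std::mt19937_64 gen{std::random_device{}()}; // a real test should log the seed for reproducibility
    std::uniform_int_distribution<std::size_t> len_dist{0, 256};
    for (int i = 0; i < 10'000; ++i) { // bounded number of randomized cases
        const std::size_t h = len_dist(gen);
        const std::size_t n = h == 0 ? 1 : std::uniform_int_distribution<std::size_t>{1, h}(gen);
        const std::size_t f = std::uniform_int_distribution<std::size_t>{0, h / n}(gen);
        run_one_case(h, n, f);
    }
#endif
}
```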