📝 The algorithm
In-place remove algorithm that uses the bit mask of vector comparisons as a shuffle index to remove certain elements.
The destination is advanced by a size taken from a lookup table too, although popcnt could have been used.
The details vary depending on element size:
The tables are selected to have up to 256 entries, to process up to 8 elements at once. Bigger tables don't fit top-level caches well. Some tables are smaller, when fewer than 8 elements fit the vector.
The 8 and 16 bit variants use SSE only, as 8 elements of 8 or 16 bits fit an SSE register, and more elements would take a bigger table. Also, there's no cross-lane shuffle within SSE4.2 / AVX2 that works with elements of such sizes. They use the famous pshufb / _mm_shuffle_epi8 to remove elements.
The 32 and 64 bit variants use AVX2; the 64 bit variant uses only a table of 16 entries, as there are only up to 4 elements. They use vpermd / _mm256_permutevar8x32_epi32 to remove elements, which is cross-lane. SSE fallbacks with smaller tables are used, still surprisingly more efficient than scalar.
The table index needs a contiguous mask of bits, a single bit per element, not several bits for the same comparison. The 8-bit variant uses pmovmskb. The 32 and 64 bit variants use vmovmskps / vmovmskpd; though these are for floating-point types, they fit well and avoid the need for cross-lane swizzling to compress the mask. For 16-bit, packsswb is used, although pshufb could have been used as well.
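To make the mechanism concrete, here is a minimal sketch of the 32-bit AVX2 path. The names (`remove_avx2_u32`, `compress_table`) are hypothetical, and the shipped code differs in details; in particular, it takes the advance from a lookup table rather than computing popcnt as below.

```cpp
#include <immintrin.h>
#include <cstdint>

// Hypothetical 256-entry table: for each 8-bit comparison mask, a vpermd
// control that packs the non-matching elements to the front of the vector.
extern const __m256i compress_table[256];

uint32_t* remove_avx2_u32(uint32_t* first, uint32_t* last, const uint32_t val) {
    const __m256i match = _mm256_set1_epi32(static_cast<int>(val));
    uint32_t* dest = first;
    for (; last - first >= 8; first += 8) {
        const __m256i data = _mm256_loadu_si256(reinterpret_cast<const __m256i*>(first));
        const __m256i eq   = _mm256_cmpeq_epi32(data, match);
        // Contiguous mask, one bit per element: the float-domain movemask
        // fits integer elements of this width, too.
        const unsigned mask = static_cast<unsigned>(
            _mm256_movemask_ps(_mm256_castsi256_ps(eq)));
        // Cross-lane shuffle selected by the mask packs kept elements together.
        const __m256i packed = _mm256_permutevar8x32_epi32(data, compress_table[mask]);
        _mm256_storeu_si256(reinterpret_cast<__m256i*>(dest), packed);
        dest += 8 - _mm_popcnt_u32(mask); // advance by the number of kept elements
    }
    for (; first != last; ++first) { // scalar tail
        if (*first != val) {
            *dest++ = *first;
        }
    }
    return dest;
}
```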
🔍 Find first!
Before even starting, a find is performed to locate the first element to remove. This is done for correctness, and there are also performance reasons why it is good:
Correctness. [algorithms.requirements]/3 states: For purposes of determining the existence of data races, algorithms shall not modify objects referenced through an iterator argument unless the specification requires such modification. Whereas [alg.remove] is vague on how the algorithm should work, I think we should only write to elements that have to be written to.
Vectorization. We can always use the full AVX2 vector size as the step, not only for 32 and 64 bit elements.
Memory bandwidth. The vectorized algorithm might be memory bound; saving writes may make it faster.
Number of operations. Fewer ops are needed to just test the content.
The existing find implementation is called. Hypothetically, I could implement it inline and save some instructions in some cases, but such an optimization has a negligible effect on performance while increasing complexity noticeably. This might be revised for a future remove_copy, if that and this were to share the implementation. The overall structure is sketched below.
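Structurally, the entry point then looks roughly like this (a sketch reusing the hypothetical `remove_avx2_u32` from above):

```cpp
#include <algorithm>

uint32_t* remove_u32(uint32_t* first, uint32_t* last, const uint32_t val) {
    // Nothing before the first match is ever written.
    first = std::find(first, last, val);
    return remove_avx2_u32(first, last, val); // no-op when first == last
}
```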
⚠️ Correctness doubt - superfluous writes
The algorithm removes elements from the source vector (of 8 or fewer elements) by a shuffle operation, so that non-removed elements are placed contiguously in that vector. Then it writes the whole vector to the destination and advances the destination pointer by the count of non-removed elements.
As a result (illustrated in the sketch below):
In the remaining range, some elements are overwritten with intermediate values before they are overwritten with the expected values.
In the removed range, some elements (up to 8 of them) are overwritten with values of other elements and never restored.
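A concrete illustration of one 8-element step removing the value 2 (which values land in the tail lanes depends on the table, hence the question marks):

```cpp
// source step:     { 1, 2, 3, 4, 2, 5, 6, 2 }
// comparison mask:   0b10010010  (set bit = remove)
// packed vector:   { 1, 3, 4, 5, 6, ?, ?, ? }  // all 8 lanes are stored
// dest advances by 5; the three '?' lanes were written to memory anyway.
// Inside the remaining range, the next step overwrites them; at the very
// end they land in the removed range and stay there.
```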
I have no doubts that overwriting elements in the resulting range with some intermediate values, before setting them to the expected values, is correct. The write and the data race (in abstract machine terms) exist anyway, so an extra write is not observable.
I have concerns regarding damaging the removed range. Changing these values is observable.
I'd appeal to the fact that elements in the removed range stay in a valid-but-unspecified state, although I understand that the purpose of the standard saying that is to enable moving of non-trivially-copyables, not to allow what I did.
Note that:
It is possible to avoid any of the superfluous writes, but it has some cost.
The cost of avoiding superfluous writes is small for 32 and 64 bit elements, and larger for 8 and 16 bit elements (see the sketch below).
When/if vectorizing remove_copy in a similar way, superfluous writes have to be avoided anyway.
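For 32 and 64 bit elements, the full-vector store in the earlier sketch could be replaced with an AVX2 masked store; `store_mask_table` here is hypothetical, mapping a lane count to a per-lane mask. AVX2 offers no such masked store for 8 and 16 bit elements, which is why the cost is larger there.

```cpp
// Hypothetical table: entry k has the sign bit set in its first k lanes.
extern const __m256i store_mask_table[9];

// Replaces the unconditional _mm256_storeu_si256 in the sketch above:
const int kept = 8 - _mm_popcnt_u32(mask);
_mm256_maskstore_epi32(reinterpret_cast<int*>(dest), store_mask_table[kept], packed);
dest += kept; // lanes past `kept` are not written at all
```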
🗄️ Memory usage
Unlike most other vectorization algorithms, this one uses large lookup tables. The 8 and 32 bit variants use a 2 KiB table, the 16 bit variant uses a 4 KiB table.
This has different performance characteristics compared to pure-computational optimizations. In particular, it tends to behave worse in programs whose critical path doesn't fit the cache well. This doesn't apply to benchmarks, but unfortunately often applies to realistic programs, especially ones not written with performance in mind.
I believe that the optimization is still good, or at least not bad, in most of the places where it is needed.
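For scale: the stated sizes are consistent with 256-entry tables at 8 or 16 bytes per entry (256 × 8 B = 2 KiB, 256 × 16 B = 4 KiB); the per-entry layout is my inference, not taken from the code.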
Modern AMD data would be interesting.
To avoid tricky swizzling, I added mixing of integers and floats in 39974d1. The overall results are better for me, but this mixing seems to carry more of a penalty on AMD than on Intel.
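The mixing in question is visible in the mask extraction of the earlier sketch: an integer-domain compare feeding a float-domain movemask.

```cpp
const __m256i eq = _mm256_cmpeq_epi32(data, match);  // integer domain
const unsigned mask = static_cast<unsigned>(
    _mm256_movemask_ps(_mm256_castsi256_ps(eq)));    // float domain
// The cast itself costs nothing, but forwarding a result between the integer
// and floating-point domains can incur a bypass delay on some
// microarchitectures - a plausible source of the extra penalty on AMD.
```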
remove_copy, if it succeeds, is going to be very similar, but it would always use the AVX2 mask, and so would be available only for 32 and 64 bit elements. I could try this within the same PR, although it seems big already.
I'm now seeing multiple possible solutions for doing remove_copy for 8 and 16 bit elements.
Thanks for the detailed writeup! I have no correctness concerns here - the Standard has a note that the garbage values are valid-but-unspecified, so we're fully within our rights to leave totally random values in there, even if the range originally contained (for example) only 10 and 20.
"If you want partition, you know where to find it."
Move the vectorized codepath below the existing call to `_RANGES _Find_unchecked`.
Drop `_Could_compare_equal_to_value_type`, as `_Find_unchecked` has handled that and found a value.
Finally, `__std_remove_N` doesn't need to start with `__std_find_trivial_N`.
Thanks! 😻 Pushed changes as usual, the most significant one making the <algorithm>/<xmemory> layer responsible for performing the initial find. Good results on my 5950X:
I've pushed a merge with main to resolve merge conflicts in search.cpp (structured as one commit to replicate the stupid thing I did in MSVC, followed by a proper fix, so I can mirror the latter).
Now, this directly constructs std::string from the std::string_views src_haystack and src_needle, and constructs std::vector from their .begin() and .end(). These are stylistic improvements to reduce verbosity.
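Roughly, the described style (a sketch; src_haystack and src_needle are the std::string_view constants mentioned above):

```cpp
const std::string haystack{src_haystack};   // directly from the string_view
const std::string needle{src_needle};
const std::vector<char> haystack_vec(src_haystack.begin(), src_haystack.end());
```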
⏱️ Benchmark results