Not really unique, modelled on #4987

⏬ Double load

To compare adjacent values, the same memory is loaded twice, shifted by one element.

It is possible to reuse part of the previous vector and mix it with the current one, saving one load at the cost of some extra mixing instructions and a loop-carried dependency. On the SSE path this can be done with `_mm_alignr_epi8` (except for 8-bit elements). On the AVX path it would be considerably more complex, because the shift has to cross AVX lane boundaries.

Benchmarking shows that the double load is faster than any reuse attempt. To some extent this result overlaps with #4958.
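For illustration, here is a minimal sketch of the two strategies for 32-bit elements. This is not the actual STL code; the function names and the scalar tails are made up, and only the structural difference (two loads per step vs. one load plus `_mm_alignr_epi8` and a dependency on `prev`) is what matters:

```cpp
#include <emmintrin.h> // SSE2
#include <tmmintrin.h> // SSSE3: _mm_alignr_epi8
#include <cstddef>
#include <cstdint>

// Scalar fallback, also used for the tails of both vector paths.
static bool any_adjacent_equal_scalar(const uint32_t* p, size_t from, size_t count) {
    for (size_t i = from; i + 1 < count; ++i) {
        if (p[i] == p[i + 1]) {
            return true;
        }
    }
    return false;
}

// Double load: the same memory is loaded twice, shifted by one element.
// Two loads per step, but no loop-carried dependency.
bool any_adjacent_equal_double_load(const uint32_t* p, size_t count) {
    size_t i = 0;
    for (; i + 5 <= count; i += 4) {
        const __m128i cur = _mm_loadu_si128(reinterpret_cast<const __m128i*>(p + i));
        const __m128i nxt = _mm_loadu_si128(reinterpret_cast<const __m128i*>(p + i + 1));
        if (_mm_movemask_epi8(_mm_cmpeq_epi32(cur, nxt)) != 0) {
            return true; // some lane j had p[i+j] == p[i+j+1]
        }
    }
    return any_adjacent_equal_scalar(p, i, count);
}

// Reuse: keep the previous vector and build the shifted one with
// _mm_alignr_epi8, saving a load but adding a mix instruction and a
// loop-carried dependency on prev.
bool any_adjacent_equal_reuse(const uint32_t* p, size_t count) {
    if (count < 8) {
        return any_adjacent_equal_scalar(p, 0, count);
    }
    __m128i prev = _mm_loadu_si128(reinterpret_cast<const __m128i*>(p)); // p[0..3]
    size_t i = 4;
    for (; i + 4 <= count; i += 4) {
        const __m128i cur = _mm_loadu_si128(reinterpret_cast<const __m128i*>(p + i)); // p[i..i+3]
        // shifted = p[i-3..i]: upper 12 bytes of prev joined with the low 4 bytes of cur
        const __m128i shifted = _mm_alignr_epi8(cur, prev, 4);
        if (_mm_movemask_epi8(_mm_cmpeq_epi32(prev, shifted)) != 0) {
            return true; // some pair among p[i-4..i] is equal
        }
        prev = cur;
    }
    return any_adjacent_equal_scalar(p, i - 4, count);
}
```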
I've discovered that something is missing, but I think it can wait for a follow-up PR.
Unlike the `remove` algorithm, this one doesn't search for the first duplicate before entering the main vectorization loop. The scalar implementations in the headers have that step; the vectorized one currently doesn't.

For performance, this is clearly a missed opportunity, though the vectorization improvement should outweigh the negative effect of the extra writes.
> For purposes of determining the existence of data races, algorithms shall not modify objects referenced through an iterator argument unless the specification requires such modification.
However, since the writes store values equal to those already present, this is not observable by concurrent reads, even if the container violates alignment requirements and the write is not atomic.

The only case where the extra writes are observable is running this algorithm on read-only data without adjacent duplicates, but that is a very silly use case.
It is easily fixable with `adjacent_find`, vectorized in another PR here.
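For reference, a minimal sketch of the shape the scalar header implementations have (`unique_sketch` is a hypothetical name, not the actual header code): nothing is written until `adjacent_find` locates the first duplicate, so a range without adjacent duplicates is never modified.

```cpp
#include <algorithm>
#include <utility>

template <class FwdIt>
FwdIt unique_sketch(FwdIt first, FwdIt last) {
    // Skip the duplicate-free prefix without performing any writes.
    first = std::adjacent_find(first, last);
    if (first == last) {
        return last; // read-only input stays untouched
    }
    FwdIt dest = first; // *first == *(first + 1) here
    ++first;
    while (++first != last) {
        if (*dest != *first) {
            *++dest = std::move(*first); // writes start only after the first duplicate
        }
    }
    return ++dest;
}
```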
⏱️ Benchmark results