Edit: this is unfinished because it won't be very useful. For small arrays the time to do two block sorts is likely already as expensive as merge sorting all the keys. And the global communication loop would be badly latency bound (maybe even on AMD using GDS). If there is anything here which is useful, it is that if the app can already bin into buckets which can each be fully sorted by one block, then a single global compaction run would be quite fast...
Here is an unfinished, very rough idea for fast parallel sorting of a semi-temporally-coherent smaller array which can fit into shared memory on the GPU:
(1.) Divide the array into blocks, where a block is the optimal size for sorting in shared memory on the GPU. For something like a GTX 560 Ti, there might be just 32 blocks resident on the GPU.
(2.) Keep track of the prior sorted array's "splitting keys": these are the keys at the end of each block.
(3.) Each block loads its region of keys (updated since the last sort, so no longer in sorted order) into shared memory and sorts it.
(4.) Starting with the initial guess of "splitting keys", each block counts its keys falling in each block interval (bounded by the splitting keys), then atomically adds the counts to global per-interval sums. Note since each block is sorted, counting takes just a simple parallel binary search (likely 11 steps) per splitting key plus a subtraction. For 32 blocks this would occupy only one warp on NVIDIA hardware, so maybe it would be better to test multiple sets of splitting-key configurations in parallel? A sketch of this counting kernel follows the list.
(5.) After the block sums are finished, something adjusts the "splitting keys" guess to converge toward the correct splitting keys, which result in even counts of keys per block. Then step (4.) runs again. Note each block (instead of something global) could re-compute the "splitting keys" as long as all blocks make the same choice. One possible adjustment scheme is sketched after the list.
(6.) Steps (4.) and (5.) repeat until the global block sums are all equal, meaning each interval holds exactly one block's worth of keys. At this point the correct "splitting keys" are known.
(7.) Each block scatters its keys to the correct destination block. This involves a global scan and a global scatter. Note the block counts are already known, and since each block is already sorted, each destination interval is a contiguous run of keys, so the scatter is likely fast. See the scatter sketch after this list.
(8.) Blocks reload the scattered keys, do an optimal in-block sort, and store the result to global memory.
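To make step (4.) concrete, here is a minimal CUDA sketch of the counting kernel. The sizes (32 resident blocks, 2048 keys per block, which gives the 11-step binary search), the buffer names, and the LowerBound helper are all assumptions made for this sketch, not part of the idea above.

#include <cuda_runtime.h>

#define NUM_BLOCKS 32   // resident blocks, one bucket each (assumed)
#define BLOCK_KEYS 2048 // keys per block; 2^11 gives an 11-step search

// Lower-bound binary search over a block's sorted shared-memory keys:
// returns the number of keys strictly less than 'key'.
__device__ unsigned LowerBound(const unsigned* keys, unsigned key) {
  unsigned lo = 0, hi = BLOCK_KEYS;
  while (lo < hi) {
    unsigned mid = (lo + hi) >> 1;
    if (keys[mid] < key) lo = mid + 1; else hi = mid;
  }
  return lo;
}

// One thread per splitting key (a single warp for 32 intervals); a
// subtraction of neighboring search results gives each interval count,
// which is atomically accumulated into the global sums.
// Assumes blockDim.x >= NUM_BLOCKS + 1.
__global__ void CountIntervals(const unsigned* g_keys,      // block-sorted keys
                               const unsigned* g_splitters, // NUM_BLOCKS-1 guesses
                               unsigned* g_counts) {        // NUM_BLOCKS global sums
  __shared__ unsigned s_keys[BLOCK_KEYS];
  __shared__ unsigned s_pos[NUM_BLOCKS + 1];
  // Cooperative coalesced load of this block's already-sorted region.
  for (unsigned i = threadIdx.x; i < BLOCK_KEYS; i += blockDim.x)
    s_keys[i] = g_keys[blockIdx.x * BLOCK_KEYS + i];
  __syncthreads();
  if (threadIdx.x <= NUM_BLOCKS) {
    unsigned t = threadIdx.x;
    // Virtual splitters at -inf and +inf bound the first and last interval.
    s_pos[t] = (t == 0) ? 0 :
               (t == NUM_BLOCKS) ? BLOCK_KEYS :
               LowerBound(s_keys, g_splitters[t - 1]);
  }
  __syncthreads();
  if (threadIdx.x < NUM_BLOCKS)
    atomicAdd(&g_counts[threadIdx.x], s_pos[threadIdx.x + 1] - s_pos[threadIdx.x]);
}

In practice this counting would be fused onto the end of the step (3.) sort rather than reloading from global memory; it is separated here only for clarity, and g_counts would be re-zeroed before each pass.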
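Step (5.)'s "something adjusts" could be realized as a per-splitter bisection of the key value range, sketched below. The persistent g_lo/g_hi bounds (initialized to the full key range) are inventions of this sketch, the target of (t+1)*BLOCK_KEYS keys below splitter t encodes the "even counts" goal, and duplicate keys (which can make an exact target unattainable) are ignored. This is only one guess at the convergence scheme.

// Could run as a single block, or redundantly per block as noted above,
// since every block sees the same global counts and makes the same choice.
__global__ void AdjustSplitters(const unsigned* g_counts, // per-interval sums
                                unsigned* g_lo,           // per-splitter lower bound
                                unsigned* g_hi,           // per-splitter upper bound
                                unsigned* g_splitters) {
  unsigned t = threadIdx.x; // one thread per splitter
  if (t >= NUM_BLOCKS - 1) return;
  // Total keys below splitter t is the sum of the first t+1 interval counts.
  unsigned below = 0;
  for (unsigned i = 0; i <= t; i++) below += g_counts[i];
  unsigned target = (t + 1) * BLOCK_KEYS;  // even counts per block
  if (below == target) return;             // this splitter has converged
  if (below < target) g_lo[t] = g_splitters[t] + 1; // too few below: raise
  else g_hi[t] = g_splitters[t];                    // too many below: lower
  g_splitters[t] = g_lo[t] + ((g_hi[t] - g_lo[t]) >> 1);
}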
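And a sketch of the scatter in step (7.), reusing LowerBound from above. It assumes step (4.) also recorded a per-(destination, source) count matrix and that an exclusive scan over it in destination-major order produced g_base, the first output slot for each source block's contribution to each destination interval; those buffers are assumptions of this sketch.

__global__ void ScatterKeys(const unsigned* g_keys,      // block-sorted keys
                            const unsigned* g_splitters, // converged splitters
                            const unsigned* g_base,      // scanned (dest,src) offsets
                            unsigned* g_out) {
  __shared__ unsigned s_keys[BLOCK_KEYS];
  __shared__ unsigned s_pos[NUM_BLOCKS + 1];
  for (unsigned i = threadIdx.x; i < BLOCK_KEYS; i += blockDim.x)
    s_keys[i] = g_keys[blockIdx.x * BLOCK_KEYS + i];
  __syncthreads();
  if (threadIdx.x <= NUM_BLOCKS) {
    unsigned t = threadIdx.x;
    s_pos[t] = (t == 0) ? 0 :
               (t == NUM_BLOCKS) ? BLOCK_KEYS :
               LowerBound(s_keys, g_splitters[t - 1]);
  }
  __syncthreads();
  // Each destination interval is a contiguous run of the sorted block,
  // so every run goes out as one mostly-coalesced burst (the
  // "semi-coalesced scatter" pass).
  for (unsigned d = 0; d < NUM_BLOCKS; d++) {
    unsigned begin = s_pos[d];
    unsigned count = s_pos[d + 1] - begin;
    unsigned base  = g_base[d * NUM_BLOCKS + blockIdx.x];
    for (unsigned i = threadIdx.x; i < count; i += blockDim.x)
      g_out[base + i] = s_keys[begin + i];
  }
}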
Clearly this idea is missing lots of details; figuring out the global communication is a hard problem. The general idea is to bound global memory access for keys to 4 passes (2 coalesced reads, 1 semi-coalesced scatter, and 1 coalesced write). The optimal in-block sort is likely an in-register sorting network after the global load, followed by an in-block parallel merge sort in shared memory. This algorithm is bounded to just 2 in-block sorts.
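For reference, the simplest stand-in for the in-block sort is a plain shared-memory bitonic sort, sketched below with the same assumed sizes as before. It is not the sorting-network-plus-merge-sort combination suggested above, just a known-correct baseline at the same block granularity.

__global__ void BlockBitonicSort(unsigned* g_keys) {
  __shared__ unsigned s_keys[BLOCK_KEYS];
  unsigned base = blockIdx.x * BLOCK_KEYS;
  for (unsigned i = threadIdx.x; i < BLOCK_KEYS; i += blockDim.x)
    s_keys[i] = g_keys[base + i];
  __syncthreads();
  // Standard bitonic stages: k is the sorted-run size being built,
  // j is the compare-exchange distance within a stage.
  for (unsigned k = 2; k <= BLOCK_KEYS; k <<= 1)
    for (unsigned j = k >> 1; j > 0; j >>= 1) {
      for (unsigned i = threadIdx.x; i < BLOCK_KEYS; i += blockDim.x) {
        unsigned ixj = i ^ j;
        if (ixj > i) { // lower index of each disjoint pair does the exchange
          bool up = ((i & k) == 0);
          unsigned a = s_keys[i], b = s_keys[ixj];
          if ((a > b) == up) { s_keys[i] = b; s_keys[ixj] = a; }
        }
      }
      __syncthreads();
    }
  for (unsigned i = threadIdx.x; i < BLOCK_KEYS; i += blockDim.x)
    g_keys[base + i] = s_keys[i];
}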