Merge Sort is a natural choice for parallel sorting because it doesn't require any synchronization. Counting sort, by contrast, needs atomic operations on global memory, which CUDA 1.0 doesn't support (CUDA 1.1 and above provide the ATOM instruction), and all the NVIDIA graphics cards in our lab are CUDA 1.0. I also had to implement Merge Sort iteratively, since CUDA doesn't support recursion in device functions. What's more, Merge Sort is an in-place sort, which saves a lot of unnecessary memory space on the GPU.
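Just to make the idea concrete, here is a rough sketch of a bottom-up, non-recursive merge sort on the GPU. This is not the code from my project: the kernel and function names are mine, it assumes a modern CUDA toolkit, and for simplicity it merges into a scratch buffer and swaps pointers each pass instead of merging strictly in place.

// Hypothetical sketch of an iterative (bottom-up) merge sort in CUDA.
// mergePassKernel / mergeSortGPU are illustrative names, not from the project.
#include <cuda_runtime.h>

__global__ void mergePassKernel(const int *src, int *dst, int n, int width)
{
    // Each thread merges one pair of adjacent sorted runs of length `width`.
    int pair  = blockIdx.x * blockDim.x + threadIdx.x;
    int start = pair * 2 * width;
    if (start >= n) return;

    int mid = min(start + width, n);
    int end = min(start + 2 * width, n);

    int i = start, j = mid, k = start;
    while (i < mid && j < end)
        dst[k++] = (src[i] <= src[j]) ? src[i++] : src[j++];
    while (i < mid) dst[k++] = src[i++];
    while (j < end) dst[k++] = src[j++];
}

// Host-side driver: doubles the run width each pass until the whole array
// is one sorted run. No recursion and no atomics are needed.
void mergeSortGPU(int *d_a, int *d_b, int n)
{
    int *src = d_a, *dst = d_b;
    for (int width = 1; width < n; width *= 2) {
        int pairs   = (n + 2 * width - 1) / (2 * width);
        int threads = 256;
        int blocks  = (pairs + threads - 1) / threads;
        mergePassKernel<<<blocks, threads>>>(src, dst, n, width);
        cudaDeviceSynchronize();
        int *tmp = src; src = dst; dst = tmp;  // ping-pong the buffers
    }
    // If the sorted result ended up in the scratch buffer, copy it back.
    if (src != d_a)
        cudaMemcpy(d_a, src, n * sizeof(int), cudaMemcpyDeviceToDevice);
}

To use it, you would cudaMalloc both d_a and d_b, copy the input into d_a, call mergeSortGPU, and copy d_a back to the host.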
I would say it works well, except that the current CUDA driver kills a kernel that runs for too long (only a few seconds) because of the Windows watchdog mechanism. So the OS is actually the main obstacle here. My professor said the latest CUDA driver will probably solve this.
Anyway, the performance comparison results are quite interesting. For small inputs, like 10k or 20k numbers, the CPU is much faster than the GPU. But for 200k, 300k, or even 1M numbers, the GPU is much faster than the CPU. Parallelism is not a silver bullet, is it?

Summary: This post discusses sorting large amounts of data in parallel on the GPU. Because the GPUs in our lab only support CUDA 1.0, Merge Sort was chosen since it requires no synchronization, and it was implemented iteratively rather than recursively. The results show that for large inputs (200k, 300k, or even 1M records), sorting on the GPU is significantly faster than on the CPU.





