贺志国
Compute the matrix product C = A * B with a hand-written kernel; the main goal is to get familiar with shared memory, streams, and events. The idea of the tiled matrix-multiplication strategy is as follows:
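Each thread block computes one BLOCK_SIZE x BLOCK_SIZE tile of C. Viewing A, B, and C as grids of such tiles, the tile of C produced by block (bx, by) is the sum of tile products:

$$C^{(by,\,bx)} \;=\; \sum_{m=0}^{A\_width/BLOCK\_SIZE\,-\,1} A^{(by,\,m)}\, B^{(m,\,bx)}$$

so each block sweeps across one row of tiles of A and down one column of tiles of B, staging each pair of tiles in shared memory before accumulating the partial products.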

The code of the example file matrix_multiply.cu is as follows:
/**
* Matrix multiplication: C = A * B.
* Host code.
*
* This sample implements matrix multiplication, making use of shared memory
* to ensure data reuse; the multiplication is done with a tiling
* approach.
*
*/
#include <iostream>
#include <vector>
// CUDA runtime
#include <cuda_runtime.h>
/**
* Matrix multiplication (CUDA Kernel) on the device: C = A * B
* A_width is A's width and B_width is B's width
*/
template <int BLOCK_SIZE>
__global__ void MatrixMulCUDA(float *C, float *A, float *B, int A_width,
                              int B_width) {
  // Block index
  int bx = blockIdx.x;
  int by = blockIdx.y;
  // Thread index
  int tx = threadIdx.x;
  int ty = threadIdx.y;
  // Index of the first sub-matrix of A processed by the block
  int A_begin = A_width * BLOCK_SIZE * by;
  // Index of the last sub-matrix of A processed by the block
  int A_end = A_begin + A_width - 1;
  // Step size used to iterate through the sub-matrices of A
  int A_step = BLOCK_SIZE;
  // Index of the first sub-matrix of B processed by the block
  int B_begin = BLOCK_SIZE * bx;
  // Step size used to iterate through the sub-matrices of B
  int B_step = BLOCK_SIZE * B_width;
  // C_sub is used to store the element of the block sub-matrix
  // that is computed by the thread
  float C_sub = 0;
  // Loop over all the sub-matrices of A and B
  // required to compute the block sub-matrix
  for (int a = A_begin, b = B_begin; a <= A_end; a += A_step, b += B_step) {
    // Declaration of the shared memory array A_sub used to
    // store the sub-matrix of A
    __shared__ float A_sub[BLOCK_SIZE][BLOCK_SIZE];
    // Declaration of the shared memory array B_sub used to
    // store the sub-matrix of B
    __shared__ float B_sub[BLOCK_SIZE][BLOCK_SIZE];
    // Load the matrices from device memory
    // to shared memory; each thread loads
    // one element of each matrix
    A_sub[ty][tx] = A[a + A_width * ty + tx];
    B_sub[ty][tx] = B[b + B_width * ty + tx];
    // Synchronize to make sure the matrices are loaded
    __syncthreads();
    // Multiply the two matrices together;
    // each thread computes one element
    // of the block sub-matrix
    #pragma unroll
    for (int k = 0; k < BLOCK_SIZE; ++k) {
      C_sub += A_sub[ty][k] * B_sub[k][tx];
    }
    // Synchronize to make sure that the preceding
    // computation is done before loading two new
    // sub-matrices of A and B in the next iteration
    __syncthreads();
  }
  // Write the block sub-matrix to device memory;
  // each thread writes one element
  int c = B_width * BLOCK_SIZE * by + BLOCK_SIZE * bx;
  C[c + B_width * ty + tx] = C_sub;
}
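To make the index arithmetic concrete: with BLOCK_SIZE = 32 and A_width = B_width = 256, block (bx = 2, by = 1) starts reading A at A_begin = 256 * 32 * 1 = 8192 (the first element of A's second row of tiles) and advances by A_step = 32 each iteration; it starts reading B at B_begin = 32 * 2 = 64 and advances by B_step = 32 * 256 = 8192, i.e. one tile-height of rows at a time; finally it writes its tile of C starting at c = 256 * 32 * 1 + 32 * 2 = 8256.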
/**
* Run a simple test of matrix multiplication using CUDA
*/
int MatrixMultiply(int block_size, const dim3 &A_dim,
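The host code is truncated at this point in the source. Judging from the summary below, MatrixMultiply allocates the matrices, issues the copies and the kernel launch asynchronously on a stream, times the kernel with events, and verifies the result. What follows is a minimal sketch of how the rest of the function might look, modeled on NVIDIA's matrixMul sample; the B_dim parameter, the fill values, and everything past the truncated signature are assumptions, not the author's original code:

// Hypothetical completion of the truncated signature above.
int MatrixMultiply(int block_size, const dim3 &A_dim, const dim3 &B_dim) {
  // Matrix sizes; assumes all dimensions are multiples of block_size.
  size_t A_size = A_dim.x * A_dim.y * sizeof(float);
  size_t B_size = B_dim.x * B_dim.y * sizeof(float);
  dim3 C_dim(B_dim.x, A_dim.y, 1);
  size_t C_size = C_dim.x * C_dim.y * sizeof(float);
  // Pinned host buffers: cudaMemcpyAsync is only truly asynchronous
  // with page-locked memory.
  float *h_A, *h_B, *h_C;
  cudaMallocHost(reinterpret_cast<void **>(&h_A), A_size);
  cudaMallocHost(reinterpret_cast<void **>(&h_B), B_size);
  cudaMallocHost(reinterpret_cast<void **>(&h_C), C_size);
  for (unsigned int i = 0; i < A_dim.x * A_dim.y; ++i) h_A[i] = 1.0f;
  for (unsigned int i = 0; i < B_dim.x * B_dim.y; ++i) h_B[i] = 0.01f;
  // Device buffers.
  float *d_A, *d_B, *d_C;
  cudaMalloc(reinterpret_cast<void **>(&d_A), A_size);
  cudaMalloc(reinterpret_cast<void **>(&d_B), B_size);
  cudaMalloc(reinterpret_cast<void **>(&d_C), C_size);
  // One non-blocking stream carries the copies and the kernel launch;
  // two events bracket the kernel for timing.
  cudaStream_t stream;
  cudaStreamCreateWithFlags(&stream, cudaStreamNonBlocking);
  cudaEvent_t start, stop;
  cudaEventCreate(&start);
  cudaEventCreate(&stop);
  cudaMemcpyAsync(d_A, h_A, A_size, cudaMemcpyHostToDevice, stream);
  cudaMemcpyAsync(d_B, h_B, B_size, cudaMemcpyHostToDevice, stream);
  dim3 threads(block_size, block_size);
  dim3 grid(C_dim.x / threads.x, C_dim.y / threads.y);
  cudaEventRecord(start, stream);
  if (block_size == 16) {
    MatrixMulCUDA<16><<<grid, threads, 0, stream>>>(d_C, d_A, d_B,
                                                    A_dim.x, B_dim.x);
  } else {
    MatrixMulCUDA<32><<<grid, threads, 0, stream>>>(d_C, d_A, d_B,
                                                    A_dim.x, B_dim.x);
  }
  cudaEventRecord(stop, stream);
  cudaMemcpyAsync(h_C, d_C, C_size, cudaMemcpyDeviceToHost, stream);
  // Wait for the timed section and the copy back to complete.
  cudaEventSynchronize(stop);
  cudaStreamSynchronize(stream);
  float ms = 0.0f;
  cudaEventElapsedTime(&ms, start, stop);
  double flops = 2.0 * C_dim.x * C_dim.y * A_dim.x;
  std::cout << "Kernel time: " << ms << " ms ("
            << flops / (ms * 1.0e6) << " GFLOP/s)" << std::endl;
  // With A filled with 1.0f and B with 0.01f, every element of C
  // should equal A_width * 0.01.
  bool correct = true;
  float expected = A_dim.x * 0.01f;
  for (unsigned int i = 0; i < C_dim.x * C_dim.y; ++i) {
    float diff = h_C[i] - expected;
    if (diff < -1.0e-3f || diff > 1.0e-3f) { correct = false; break; }
  }
  std::cout << (correct ? "Result = PASS" : "Result = FAIL") << std::endl;
  cudaEventDestroy(start);
  cudaEventDestroy(stop);
  cudaStreamDestroy(stream);
  cudaFree(d_A); cudaFree(d_B); cudaFree(d_C);
  cudaFreeHost(h_A); cudaFreeHost(h_B); cudaFreeHost(h_C);
  return correct ? 0 : 1;
}

The pinned allocation via cudaMallocHost is what lets the cudaMemcpyAsync calls actually overlap with host work; with ordinary pageable memory the copies would behave synchronously in practice.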

This article presents a CUDA program that implements matrix multiplication, using shared memory to improve efficiency, and a tiling strategy with a templated kernel to handle matrices of different sizes. The code declares the shared-memory arrays with the __shared__ keyword and unrolls the inner loop with #pragma unroll to reduce instruction count. It also uses a stream for asynchronous operations, such as cudaMemcpyAsync and the MatrixMulCUDA launch, to overlap work. A CMake configuration file sets the compile options and builds the executable. The test measures performance and verifies that the results are correct.