A CUDA Example of Matrix Multiplication: Shared Memory, Streams, and Events

This article presents a CUDA program that implements matrix multiplication, using shared memory to improve efficiency and a tiling strategy with a templated kernel to handle different matrix sizes. The code declares shared-memory arrays with the __shared__ keyword and unrolls the inner loop with #pragma unroll to reduce instruction count. It also uses a stream for asynchronous operations, such as cudaMemcpyAsync and the MatrixMulCUDA launch, to overlap tasks. A CMake configuration file sets the compile options and generates the executable. The test measures performance and verifies that the results are correct.

贺志国

Compute the matrix product C = A * B with a hand-written kernel; the main goal is to become familiar with shared memory, streams, and events. The tiled matrix multiplication strategy is illustrated in the figure below:
(Figure: matrix-multiplication-with-shared-memory)

The example file matrix_multiply.cu is listed below:

/**
 * Matrix multiplication: C = A * B.
 * Host code.
 *
 * This sample implements matrix multiplication which makes use of shared memory
 * to ensure data reuse, the matrix multiplication is done using tiling
 * approach.
 *
 */

#include <iostream>
#include <vector>

// CUDA runtime
#include <cuda_runtime.h>

/**
 * Matrix multiplication (CUDA Kernel) on the device: C = A * B
 * A_width is A's width and B_width is B's width
 */
template <int BLOCK_SIZE>
__global__ void MatrixMulCUDA(float *C, float *A, float *B, int A_width,
                              int B_width) {
  // Block index
  int bx = blockIdx.x;
  int by = blockIdx.y;

  // Thread index
  int tx = threadIdx.x;
  int ty = threadIdx.y;

  // Index of the first sub-matrix of A processed by the block
  int A_begin = A_width * BLOCK_SIZE * by;

  // Index of the last sub-matrix of A processed by the block
  int A_end = A_begin + A_width - 1;

  // Step size used to iterate through the sub-matrices of A
  int A_step = BLOCK_SIZE;

  // Index of the first sub-matrix of B processed by the block
  int B_begin = BLOCK_SIZE * bx;

  // Step size used to iterate through the sub-matrices of B
  int B_step = BLOCK_SIZE * B_width;

  // C_sub is used to store the element of the block sub-matrix
  // that is computed by the thread
  float C_sub = 0;

  // Loop over all the sub-matrices of A and B
  // required to compute the block sub-matrix
  for (int a = A_begin, b = B_begin; a <= A_end; a += A_step, b += B_step) {
    // Declaration of the shared memory array A_sub used to
    // store the sub-matrix of A
    __shared__ float A_sub[BLOCK_SIZE][BLOCK_SIZE];

    // Declaration of the shared memory array B_sub used to
    // store the sub-matrix of B
    __shared__ float B_sub[BLOCK_SIZE][BLOCK_SIZE];

    // Load the matrices from device memory
    // to shared memory; each thread loads
    // one element of each matrix
    A_sub[ty][tx] = A[a + A_width * ty + tx];
    B_sub[ty][tx] = B[b + B_width * ty + tx];

    // Synchronize to make sure the matrices are loaded
    __syncthreads();

    // Multiply the two matrices together;
    // each thread computes one element
    // of the block sub-matrix
#pragma unroll
    for (int k = 0; k < BLOCK_SIZE; ++k) {
      C_sub += A_sub[ty][k] * B_sub[k][tx];
    }

    // Synchronize to make sure that the preceding
    // computation is done before loading two new
    // sub-matrices of A and B in the next iteration
    __syncthreads();
  }

  // Write the block sub-matrix to device memory;
  // each thread writes one element
  int c = B_width * BLOCK_SIZE * by + BLOCK_SIZE * bx;
  C[c + B_width * ty + tx] = C_sub;
}

/**
 * Run a simple test of matrix multiplication using CUDA
 */
int MatrixMultiply(int block_size, const dim3 &A_dim,
                   const dim3 &B_dim);
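The original listing is cut off after this point. Based on the summary above (a stream driving cudaMemcpyAsync and the kernel launch, events for timing, a performance figure, and a correctness check), a minimal sketch of what such a MatrixMultiply host function might look like follows. The constant input values, the assumption that every dimension divides evenly by block_size, and the omission of error checking are my own simplifications, not the original code:

```cuda
#include <cmath>
#include <cstdlib>
#include <iostream>
#include <vector>

#include <cuda_runtime.h>

int MatrixMultiply(int block_size, const dim3 &A_dim, const dim3 &B_dim) {
  // C has B's width and A's height; all dimensions are assumed to be
  // multiples of block_size so the grid divides evenly.
  dim3 C_dim(B_dim.x, A_dim.y, 1);
  size_t A_bytes = A_dim.x * A_dim.y * sizeof(float);
  size_t B_bytes = B_dim.x * B_dim.y * sizeof(float);
  size_t C_bytes = C_dim.x * C_dim.y * sizeof(float);

  // Constant inputs make the expected result easy to verify:
  // every element of C should equal A_dim.x * 1.0f * 0.01f.
  std::vector<float> host_A(A_dim.x * A_dim.y, 1.0f);
  std::vector<float> host_B(B_dim.x * B_dim.y, 0.01f);
  std::vector<float> host_C(C_dim.x * C_dim.y, 0.0f);

  float *dev_A = nullptr, *dev_B = nullptr, *dev_C = nullptr;
  cudaMalloc(&dev_A, A_bytes);
  cudaMalloc(&dev_B, B_bytes);
  cudaMalloc(&dev_C, C_bytes);

  // One non-blocking stream carries all copies and the kernel launch.
  cudaStream_t stream;
  cudaStreamCreateWithFlags(&stream, cudaStreamNonBlocking);

  cudaMemcpyAsync(dev_A, host_A.data(), A_bytes, cudaMemcpyHostToDevice,
                  stream);
  cudaMemcpyAsync(dev_B, host_B.data(), B_bytes, cudaMemcpyHostToDevice,
                  stream);

  dim3 threads(block_size, block_size);
  dim3 grid(C_dim.x / threads.x, C_dim.y / threads.y);

  // Events recorded in the same stream bracket the kernel for timing.
  cudaEvent_t start, stop;
  cudaEventCreate(&start);
  cudaEventCreate(&stop);
  cudaEventRecord(start, stream);

  if (block_size == 16) {
    MatrixMulCUDA<16>
        <<<grid, threads, 0, stream>>>(dev_C, dev_A, dev_B, A_dim.x, B_dim.x);
  } else {
    MatrixMulCUDA<32>
        <<<grid, threads, 0, stream>>>(dev_C, dev_A, dev_B, A_dim.x, B_dim.x);
  }

  cudaEventRecord(stop, stream);
  cudaMemcpyAsync(host_C.data(), dev_C, C_bytes, cudaMemcpyDeviceToHost,
                  stream);
  cudaStreamSynchronize(stream);

  float elapsed_ms = 0.0f;
  cudaEventElapsedTime(&elapsed_ms, start, stop);
  double flops = 2.0 * C_dim.x * C_dim.y * A_dim.x;
  std::cout << "Time: " << elapsed_ms << " ms, "
            << flops / (elapsed_ms * 1.0e6) << " GFLOP/s\n";

  // Verify against the analytically known result.
  bool correct = true;
  const float expected = A_dim.x * 0.01f;
  for (size_t i = 0; i < host_C.size(); ++i) {
    if (std::fabs(host_C[i] - expected) > 1.0e-3f * expected) {
      correct = false;
      break;
    }
  }

  cudaEventDestroy(start);
  cudaEventDestroy(stop);
  cudaStreamDestroy(stream);
  cudaFree(dev_A);
  cudaFree(dev_B);
  cudaFree(dev_C);
  return correct ? EXIT_SUCCESS : EXIT_FAILURE;
}
```

Note that the stop event is recorded before the device-to-host copy, so the measured time covers only the kernel; cudaStreamSynchronize then guarantees both the event and the copy have completed before the host reads host_C.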
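The summary also mentions a CMake configuration file, which does not survive in this copy. A minimal CMakeLists.txt sketch for a single-file CUDA sample like this one, with an assumed project name and target architecture, might look like:

```cmake
cmake_minimum_required(VERSION 3.18)
project(matrix_multiply LANGUAGES CXX CUDA)

set(CMAKE_CXX_STANDARD 14)
set(CMAKE_CUDA_STANDARD 14)
# Adjust to the compute capability of the target GPU.
set(CMAKE_CUDA_ARCHITECTURES 70)

add_executable(matrix_multiply matrix_multiply.cu)
```

With CMake 3.18 or newer, enabling the CUDA language is enough for nvcc to compile the .cu file; no custom compile commands are needed.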