
CUDA
闭门即深山
SLAMer
NVIDIA CUDA Learning (5) Texture Memory
Covers texture memory and a heat-conduction model built on top of it. Posted 2020-09-09.
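The book's heat-transfer example reads its input through texture memory. Texture references, which the book uses, are deprecated in current CUDA, so the minimal sketch below uses the texture object API on a plain linear buffer instead; the kernel name and sizes are illustrative, not taken from the post.

    #include <cstdio>
    #include <cuda_runtime.h>

    // Reads the input through the texture object (cached texture path).
    __global__ void scale(cudaTextureObject_t tex, float *out, int n) {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n) out[i] = 2.0f * tex1Dfetch<float>(tex, i);
    }

    int main(void) {
        const int N = 1024;
        float h_in[N], h_out[N];
        for (int i = 0; i < N; ++i) h_in[i] = (float)i;

        float *d_in, *d_out;
        cudaMalloc((void**)&d_in,  N * sizeof(float));
        cudaMalloc((void**)&d_out, N * sizeof(float));
        cudaMemcpy(d_in, h_in, N * sizeof(float), cudaMemcpyHostToDevice);

        // Describe the linear buffer as a texture resource.
        cudaResourceDesc resDesc = {};
        resDesc.resType = cudaResourceTypeLinear;
        resDesc.res.linear.devPtr = d_in;
        resDesc.res.linear.desc = cudaCreateChannelDesc<float>();
        resDesc.res.linear.sizeInBytes = N * sizeof(float);

        cudaTextureDesc texDesc = {};
        texDesc.readMode = cudaReadModeElementType;

        cudaTextureObject_t tex = 0;
        cudaCreateTextureObject(&tex, &resDesc, &texDesc, NULL);

        scale<<<(N + 255) / 256, 256>>>(tex, d_out, N);

        cudaMemcpy(h_out, d_out, N * sizeof(float), cudaMemcpyDeviceToHost);
        printf("out[10] = %f\n", h_out[10]);   // expect 20.0

        cudaDestroyTextureObject(tex);
        cudaFree(d_in);
        cudaFree(d_out);
        return 0;
    }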
NVIDIA CUDA Learning (4) Constant Memory and Events
Covers constant memory and CUDA events. Posted 2020-09-09.
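A minimal sketch of the two ideas the post names: data placed in __constant__ memory and copied there with cudaMemcpyToSymbol, and a kernel timed with cudaEvent_t. The kernel and array names are illustrative, not taken from the post.

    #include <cstdio>
    #include <cuda_runtime.h>

    #define N 256

    // Coefficients live in constant memory: cached and broadcast to all threads.
    __constant__ float coeff[N];

    __global__ void apply(float *data) {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < N) data[i] *= coeff[i];
    }

    int main(void) {
        float h_coeff[N], h_data[N];
        for (int i = 0; i < N; ++i) { h_coeff[i] = 2.0f; h_data[i] = (float)i; }

        float *d_data;
        cudaMalloc((void**)&d_data, N * sizeof(float));
        cudaMemcpy(d_data, h_data, N * sizeof(float), cudaMemcpyHostToDevice);
        // Constant memory is filled with cudaMemcpyToSymbol, not cudaMemcpy.
        cudaMemcpyToSymbol(coeff, h_coeff, N * sizeof(float));

        // Events bracket the kernel to measure elapsed GPU time.
        cudaEvent_t start, stop;
        cudaEventCreate(&start);
        cudaEventCreate(&stop);
        cudaEventRecord(start, 0);

        apply<<<1, N>>>(d_data);

        cudaEventRecord(stop, 0);
        cudaEventSynchronize(stop);
        float ms = 0.0f;
        cudaEventElapsedTime(&ms, start, stop);
        printf("kernel took %.3f ms\n", ms);

        cudaMemcpy(h_data, d_data, N * sizeof(float), cudaMemcpyDeviceToHost);
        printf("data[3] = %f\n", h_data[3]);   // expect 6.0

        cudaEventDestroy(start);
        cudaEventDestroy(stop);
        cudaFree(d_data);
        return 0;
    }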
NVIDIA CUDA Learning (3) Thread Cooperation
Covers launching parallel blocks, launching parallel threads, mixing blocks and threads, adding long vectors, processing an image, and shared memory with synchronization. On launching parallel blocks:

    add<<<N,1>>>( dev_a, dev_b, dev_c );   // N blocks x 1 thread/block = N parallel threads

The 1 in this launch is the number of threads per block we want the CUDA runtime to create on our behalf, so each block here runs a single thread. Posted 2020-09-08.
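For the shared-memory-and-synchronization part, a minimal sketch in the spirit of the book's dot-product example: each block accumulates partial products in __shared__ memory and uses __syncthreads() between phases; sizes and names are illustrative.

    #include <cstdio>
    #include <cuda_runtime.h>

    #define N 1024
    #define THREADS 256

    // Each block reduces its products in shared memory; __syncthreads()
    // keeps every phase of the reduction in order. Thread 0 of each block
    // writes one partial sum.
    __global__ void dot(const float *a, const float *b, float *partial) {
        __shared__ float cache[THREADS];
        int tid = blockIdx.x * blockDim.x + threadIdx.x;
        cache[threadIdx.x] = (tid < N) ? a[tid] * b[tid] : 0.0f;
        __syncthreads();

        // Tree reduction inside the block.
        for (int stride = blockDim.x / 2; stride > 0; stride /= 2) {
            if (threadIdx.x < stride)
                cache[threadIdx.x] += cache[threadIdx.x + stride];
            __syncthreads();
        }
        if (threadIdx.x == 0) partial[blockIdx.x] = cache[0];
    }

    int main(void) {
        const int blocks = N / THREADS;
        float h_a[N], h_b[N], h_partial[blocks];
        for (int i = 0; i < N; ++i) { h_a[i] = 1.0f; h_b[i] = 2.0f; }

        float *d_a, *d_b, *d_partial;
        cudaMalloc((void**)&d_a, N * sizeof(float));
        cudaMalloc((void**)&d_b, N * sizeof(float));
        cudaMalloc((void**)&d_partial, blocks * sizeof(float));
        cudaMemcpy(d_a, h_a, N * sizeof(float), cudaMemcpyHostToDevice);
        cudaMemcpy(d_b, h_b, N * sizeof(float), cudaMemcpyHostToDevice);

        dot<<<blocks, THREADS>>>(d_a, d_b, d_partial);

        cudaMemcpy(h_partial, d_partial, blocks * sizeof(float), cudaMemcpyDeviceToHost);
        float sum = 0.0f;
        for (int i = 0; i < blocks; ++i) sum += h_partial[i];
        printf("dot = %f (expect %f)\n", sum, 2.0f * N);   // 2048

        cudaFree(d_a); cudaFree(d_b); cudaFree(d_partial);
        return 0;
    }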
NVIDIA CUDA Learning (2) CUDA Parallel Programming
Covers CPU parallel programming, GPU parallel programming, the meaning of the angle brackets, and the block index. The CPU version of vector addition:

    #include "../common/book.h"
    #define N 10

    void add( int *a, int *b, int *c ) {
        int tid = 0;    // this is CPU zero, so we start at zero
        while (tid < N) {
            c[tid] = a[tid] + b[tid];
            tid += 1;   // we have one CPU, so we increment by one
        }
    }

Posted 2020-09-06.
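For the GPU side ("the meaning of the angle brackets" and "block index"), a sketch of the same addition launched as N blocks of one thread each, where every block finds its element through blockIdx.x. The book wraps the runtime calls in its HANDLE_ERROR macro from book.h; plain calls are used here to keep the sketch self-contained.

    #include <cstdio>
    #include <cuda_runtime.h>

    #define N 10

    // GPU version of add: one block per element, tid comes from blockIdx.x.
    __global__ void add(int *a, int *b, int *c) {
        int tid = blockIdx.x;   // which block am I running in?
        if (tid < N) c[tid] = a[tid] + b[tid];
    }

    int main(void) {
        int a[N], b[N], c[N];
        int *dev_a, *dev_b, *dev_c;
        for (int i = 0; i < N; ++i) { a[i] = -i; b[i] = i * i; }

        cudaMalloc((void**)&dev_a, N * sizeof(int));
        cudaMalloc((void**)&dev_b, N * sizeof(int));
        cudaMalloc((void**)&dev_c, N * sizeof(int));
        cudaMemcpy(dev_a, a, N * sizeof(int), cudaMemcpyHostToDevice);
        cudaMemcpy(dev_b, b, N * sizeof(int), cudaMemcpyHostToDevice);

        add<<<N, 1>>>(dev_a, dev_b, dev_c);   // N blocks, 1 thread per block

        cudaMemcpy(c, dev_c, N * sizeof(int), cudaMemcpyDeviceToHost);
        for (int i = 0; i < N; ++i) printf("%d + %d = %d\n", a[i], b[i], c[i]);

        cudaFree(dev_a); cudaFree(dev_b); cudaFree(dev_c);
        return 0;
    }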
NVIDIA CUDA Learning (1) CUDA By Example
Covers host vs. device, CUDA C programming, allocating device memory, kernel definition and launch, and usage conventions. The host is your CPU; the device is your GPU. The first CUDA C example begins:

    #include <iostream>
    #include "book.h"

    __global__ void add( int a, int b, int *c ) {
        *c = a + b;
    }

    int main( void ) {
        int c;
        int *dev_c;
        HANDLE_ERROR( …

Posted 2020-09-05.
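The excerpt breaks off mid-call. A completed version consistent with the book's opening example, assuming the HANDLE_ERROR macro and stdio that the book's book.h provides:

    #include <iostream>
    #include "book.h"   // from the book's source: supplies HANDLE_ERROR and stdio

    __global__ void add( int a, int b, int *c ) {
        *c = a + b;   // runs on the device
    }

    int main( void ) {
        int c;
        int *dev_c;
        HANDLE_ERROR( cudaMalloc( (void**)&dev_c, sizeof(int) ) );

        add<<<1,1>>>( 2, 7, dev_c );   // one block, one thread

        HANDLE_ERROR( cudaMemcpy( &c, dev_c, sizeof(int),
                                  cudaMemcpyDeviceToHost ) );
        printf( "2 + 7 = %d\n", c );
        cudaFree( dev_c );
        return 0;
    }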