Parallel Computing
MPI (Message-Passing Interface)
- Has Fortran and C bindings
- The C++ bindings were removed in MPI 3.0 (C++ code can still call the C library via mpi.h)
Programming model: distributed memory
- The unit of execution is the process
- Passing messages costs time
POD (plain old data): C-style data types without pointers; avoid C++ inheritance and virtual functions.
Containers holding multiple PODs can also be used, e.g. std::string, std::array, std::vector.
Installing MPI
Example: helloworld.c
#include <stdio.h>
#include <mpi.h>

int main(int argc, char *argv[]) {
    int ierr, procid, numprocs;
    ierr = MPI_Init(&argc, &argv);
    ierr = MPI_Comm_rank(MPI_COMM_WORLD, &procid);
    ierr = MPI_Comm_size(MPI_COMM_WORLD, &numprocs);
    printf("Hello world! I am process %d out of %d!\n", procid, numprocs);
    ierr = MPI_Finalize();
    return 0;
}
Example: helloworld.cpp
#include <iostream>
#include <mpi.h>
using namespace std;

int main(int argc, char *argv[]) {
    int ierr, procid, numprocs;
    ierr = MPI_Init(&argc, &argv);
    ierr = MPI_Comm_rank(MPI_COMM_WORLD, &procid);
    ierr = MPI_Comm_size(MPI_COMM_WORLD, &numprocs);
    cout << "Hello world! I am process " << procid << " out of " << numprocs << "!\n";
    ierr = MPI_Finalize();
    return 0;
}
Compile and run the C++ version:
mpic++ -o prog.exe helloworld.cpp
mpirun -n 4 ./prog.exe
MPI_Send(buf, count, type, dest, tag, comm) sends a message
MPI_Recv(buf, count, type, src, tag, comm, status) receives a message
Example: sendrecv.c
#include <stdio.h>
#include <mpi.h>

int main(int argc, char *argv[])
{
    int ierr, procid, numprocs;
    ierr = MPI_Init(&argc, &argv);
    ierr = MPI_Comm_rank(MPI_COMM_WORLD, &procid);
    ierr = MPI_Comm_size(MPI_COMM_WORLD, &numprocs);
    if (numprocs % 2 != 0) {
        printf("ERROR: Number of processes is not even!\n");
        return MPI_Abort(MPI_COMM_WORLD, 1);
    }
    if (procid % 2 == 0) {
        // even procid i will send the number 3.14+i to procid i+1
        double val = 3.14 + procid;
        MPI_Send(&val, 1, MPI_DOUBLE, procid+1, 0, MPI_COMM_WORLD);
        printf("ProcID %d sent value %lf to ProcID %d.\n", procid, val, procid+1);
    }
    else {
        // odd procid i will wait to receive a value from any procid
        double val;
        MPI_Status status;
        ierr = MPI_Recv(&val, 1, MPI_DOUBLE, MPI_ANY_SOURCE, 0, MPI_COMM_WORLD, &status);
        if (ierr == MPI_SUCCESS)
            printf("ProcID %d received value %lf.\n", procid, val);
        else
            printf("ProcID %d did not successfully receive a value!\n", procid);
    }
    ierr = MPI_Finalize();
    return 0;
}
Compile and run:
mpicc -o sr.exe sendrecv.c
mpirun -np 4 ./sr.exe
- MPI for Python
Its API is based on the MPI-2 C++ bindings.
It ships with a sample that computes pi; the run command is below. On a machine with multiple network interfaces, one interface must be selected explicitly, otherwise Open MPI fails with:
Open MPI detected an inbound MPI TCP connection request from a peer
that appears to be part of this MPI job
mpirun -np 5 -mca btl_tcp_if_include enp65s0f0 -x NCCL_SOCKET_IFNAME=enp65s0f0 python3 cpi_p.py
OpenCL
- Runs on CPUs and GPUs
- Can reach very high performance, but OpenCL C is hard to program
OpenACC
- GPU
CUDA
- GPU
OpenMP
- CPU
- C, C++, Fortran
The latest documentation has to be downloaded from GitHub; running make builds the corresponding PDF.
OpenMP turns a program into a parallel one simply by adding a few directives; for example, the first example from the documentation:
#include <omp.h>
#include <iostream>
using namespace std;

void simple(int n, float *a, float *b)
{
    int i;
    #pragma omp parallel for
    for (i = 1; i < n; i++)   /* i is private by default */
        b[i] = (a[i] + a[i-1]) / 2.0;
}

int main() {
    int n = 10;
    float *a = new float[n];
    float *b = new float[n];
    for (int i = 0; i < n; i++)
        a[i] = i;
    simple(n, a, b);
    // simple() only writes b[1..n-1]; b[0] is never assigned, so print from i = 1
    for (int i = 1; i < n; i++)
        cout << b[i] << " ";
    cout << "\n";
    delete[] a;
    delete[] b;
    return 0;
}
Create a Makefile:
ploop: ploop.c
g++ -fopenmp -o $@.o -c $^
g++ -fopenmp -o $@ $@.o
make ploop then produces the executable.
pthreads
- CPU
- C,C++, Fortran
Python
- Can use multiple processes via the multiprocessing module to bypass the GIL
- NumPy functions such as dot already use parallelism by default
- To release the GIL when several threads access the same object, use Cython:
  with nogil:
      xxxx
- Cython parallelism
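The multiprocessing route can be sketched as follows: a Pool of worker processes maps a CPU-bound function over the inputs, each worker running in its own interpreter with its own GIL (the pool size and the square function are illustrative):

```python
from multiprocessing import Pool

def square(x):
    # CPU-bound work runs in a separate process, unaffected by the parent's GIL.
    return x * x

if __name__ == "__main__":
    # Four worker processes; pool.map preserves the input order.
    with Pool(processes=4) as pool:
        results = pool.map(square, range(8))
    print(results)  # [0, 1, 4, 9, 16, 25, 36, 49]
```

The `if __name__ == "__main__"` guard is required on platforms that spawn (rather than fork) worker processes.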