Basic Data Structures of MPI

http://www.lam-mpi.org/tutorials/one-step/datatypes.php

Heterogeneous computing requires that the data constituting a message be typed or described somehow so that its machine representation can be converted between computer architectures. MPI can thoroughly describe message datatypes, from the simple primitive machine types to complex structures, arrays and indices.

MPI messaging functions accept a datatype parameter, whose C typedef is MPI_Datatype:

    MPI_Send(void* buf, int count, MPI_Datatype datatype,
	    int dest, int tag, MPI_Comm comm);

Basic Datatypes

Everybody uses the primitive machine datatypes. Some C examples are listed below (with the corresponding C datatype in parentheses):

    MPI_CHAR (char)
    MPI_INT (int)
    MPI_FLOAT (float)
    MPI_DOUBLE (double)

The count parameter in MPI_Send( ) refers to the number of elements of the given datatype, not the total number of bytes.
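
For example, the following minimal sketch (variable names are illustrative) sends an array of 100 doubles; the count is 100 elements, not the size in bytes:


#include <mpi.h>

{
    float;
    double	results[100];
    int		dest, tag;

    /* count = number of MPI_DOUBLE elements, not bytes */
    MPI_Send(results, 100, MPI_DOUBLE, dest, tag, MPI_COMM_WORLD);
}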

For messages consisting of a homogeneous, contiguous array of basic datatypes, this is the end of the datatype discussion. For messages that contain more than one datatype or whose elements are not stored contiguously in memory, something more is needed.

Strided Vector

Consider a mesh application with patches of a 2D array assigned to different processes. The internal boundary rows and columns are transferred between north/south and east/west processes in the overall mesh. In C, the transfer of a row in a 2D array is simple: a contiguous vector of elements equal in number to the number of columns in the 2D array. Conversely, the elements of a single column are dispersed in memory, each vector element separated from its next and previous indices by the size of one entire row.

An MPI derived datatype is a good solution for a non-contiguous data structure. A code fragment to derive an appropriate datatype matching this strided vector and then transmit the last column is listed below:


#include <mpi.h>

{
    float		mesh[10][20];
    int			dest, tag;
    MPI_Datatype	newtype;

/*
 * Do this once.
 */
    MPI_Type_vector(10,		/* count: one block per row */
	    1,			/* blocklength: 1 element per block */
	    20,			/* stride: 20 elements between block starts */
	    MPI_FLOAT,		/* elements are float */
	    &newtype);		/* MPI derived datatype */

    MPI_Type_commit(&newtype);
/*
 * Do this for every new message.
 */
    MPI_Send(&mesh[0][19], 1, newtype,
	    dest, tag, MPI_COMM_WORLD);
}


MPI_Type_commit( ) separates the datatypes you really want to save and use from the intermediate ones that are scaffolded on the way to some very complex datatype.
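
For instance, the following fragment is a minimal sketch, not from the original tutorial, in which an intermediate 5-float type is used only as scaffolding for a sub-block type; only the final type is committed, and the intermediate is freed:


#include <mpi.h>

{
    float		mesh[10][20];
    int			dest, tag;
    MPI_Datatype	rowpiece, block;

/*
 * Intermediate type: 5 contiguous floats (part of one row).
 */
    MPI_Type_contiguous(5, MPI_FLOAT, &rowpiece);

/*
 * Final type: 4 such pieces, one per row. The stride of 4 is in
 * units of rowpiece's extent (4 * 5 floats = one 20-element row).
 */
    MPI_Type_vector(4, 1, 4, rowpiece, &block);
    MPI_Type_commit(&block);	/* commit only the type actually used */

    MPI_Type_free(&rowpiece);	/* safe; block is unaffected */

/*
 * Send the 4x5 sub-block whose upper-left corner is mesh[2][5].
 */
    MPI_Send(&mesh[2][5], 1, block, dest, tag, MPI_COMM_WORLD);
}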

A nice feature of MPI derived datatypes is that once created, they can be used repeatedly with no further set-up code. MPI has many other derived datatype constructors.
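
On the receiving side, no special set-up is needed at all. The fragment below is a sketch, assuming the sender used the newtype column datatype from the earlier fragment; MPI matches type signatures, not type constructors, so a strided send can be received into a contiguous array:


{
    float	column[10];
    int		src, tag;
    MPI_Status	status;

/*
 * 10 floats sent as one strided column arrive as 10 floats.
 */
    MPI_Recv(column, 10, MPI_FLOAT, src, tag, MPI_COMM_WORLD, &status);
}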

C Structure

Consider an imaging application that is transferring fixed-length scan lines of eight-bit color pixels. Coupled with the pixel array is the scan line number, an integer. The message might be described in C as a structure:

    struct {
	int		lineno;
	char		pixels[1024];
    } scanline;
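
Before turning to packing, note that a derived datatype can describe this structure directly. The fragment below is a minimal sketch, not from the original tutorial, using MPI_Type_create_struct (the MPI-2 name for this constructor); dest and tag are assumed to be set elsewhere:


#include <mpi.h>
#include <stddef.h>

{
    struct scanline_s {
	int		lineno;
	char		pixels[1024];
    } scanline;
    int			dest, tag;
    int			blocklens[2] = { 1, 1024 };
    MPI_Aint		disps[2] = { offsetof(struct scanline_s, lineno),
				     offsetof(struct scanline_s, pixels) };
    MPI_Datatype	types[2] = { MPI_INT, MPI_CHAR };
    MPI_Datatype	scantype;

    MPI_Type_create_struct(2, blocklens, disps, types, &scantype);
    MPI_Type_commit(&scantype);

    MPI_Send(&scanline, 1, scantype, dest, tag, MPI_COMM_WORLD);
}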

In addition to a derived datatype, message packing is a useful method for sending non-contiguous and/or heterogeneous data. A code fragment to pack and send the above structure is listed below:


#include <mpi.h>
#include <stdlib.h>

{
    int			membersize, maxsize;	/* MPI_Pack_size( ) expects int */
    int			position;
    int			dest, tag;
    char		*buffer;
/*
 * Do this once.
 */
    MPI_Pack_size(1, 		/* one element */
	    MPI_INT,		/* datatype integer */
	    MPI_COMM_WORLD,	/* consistent comm. */
	    &membersize);	/* max packing space req'd */

    maxsize = membersize;
    MPI_Pack_size(1024, MPI_CHAR, MPI_COMM_WORLD, &membersize);
    maxsize += membersize;
    buffer = malloc(maxsize);
/*
 * Do this for every new message.
 */
    position = 0;

    MPI_Pack(&scanline.lineno,	/* pack this element */
	    1,			/* one element */
	    MPI_INT,		/* datatype int */
	    buffer,		/* packing buffer */
	    maxsize,		/* buffer size */
	    &position,		/* next free byte offset */
	    MPI_COMM_WORLD);	/* consistent comm. */

    MPI_Pack(scanline.pixels, 1024, MPI_CHAR,
	    buffer, maxsize, &position, MPI_COMM_WORLD);

    MPI_Send(buffer, position, MPI_PACKED,
	    dest, tag, MPI_COMM_WORLD);
}


A buffer large enough to hold the packed structure is allocated once. The required size must be computed with MPI_Pack_size( ) because of implementation-dependent overhead in the message. Variable-sized messages can be handled by allocating a buffer large enough for the largest possible message. The position parameter to MPI_Pack( ) always returns the current size of the packed buffer.

A code fragment to unpack the message, assuming a receive buffer has been allocated, is listed below:


{
    int             src;
    int             msgsize;
    MPI_Status      status;

    MPI_Recv(buffer, maxsize, MPI_PACKED,
	    src, tag, MPI_COMM_WORLD, &status);

    position = 0;
    MPI_Get_count(&status, MPI_PACKED, &msgsize);

    MPI_Unpack(buffer,		/* packing buffer */
	    msgsize,		/* buffer size */
	    &position,		/* next element byte offset */
	    &scanline.lineno,	/* unpack this element */
	    1,			/* one element */
	    MPI_INT,		/* datatype int */
	    MPI_COMM_WORLD);	/* consistent comm. */

    MPI_Unpack(buffer, msgsize, &position,
	    scanline.pixels, 1024, MPI_CHAR, MPI_COMM_WORLD);
}

You should be able to modify the above code fragments for any structure. It is entirely possible to alter the number of elements to unpack based on application information unpacked earlier in the same message.
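
For example, here is a minimal sketch, assuming a hypothetical message in which the sender packed an integer count followed by that many floats (MAXVALUES is an assumed upper bound on the count):


{
    int		nvalues;
    float	values[MAXVALUES];

    position = 0;
/*
 * Unpack the count first...
 */
    MPI_Unpack(buffer, msgsize, &position,
	    &nvalues, 1, MPI_INT, MPI_COMM_WORLD);
/*
 * ...then unpack exactly that many elements.
 */
    MPI_Unpack(buffer, msgsize, &position,
	    values, nvalues, MPI_FLOAT, MPI_COMM_WORLD);
}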