Parallel Programming

MPI Function Dictionary

by suminhan 2016. 10. 4.

This post is my own summary of MPI function parameters, the MPI_Datatype values, the MPI_Reduce operations, and related items.

For a more detailed treatment of MPI, see the KISTI Lecture Note: http://ap2.khu.ac.kr/download/mpi_lec.pdf

Alternatively, see MPI Routines for a reference that lists each function.


Basic functions:

- MPI_Init(&argc, &argv)

- MPI_Comm_rank(MPI_Comm comm, int *rank)

- MPI_Comm_size(MPI_Comm comm, int *size)

- MPI_Finalize()
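
A minimal sketch using just these four calls; the printed message is only an illustration:

#include <mpi.h>
#include <stdio.h>

int main(int argc, char *argv[])
{
    int rank, size;

    MPI_Init(&argc, &argv);                 /* start the MPI runtime */
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);   /* rank of this process within the communicator */
    MPI_Comm_size(MPI_COMM_WORLD, &size);   /* total number of processes */

    printf("Hello from rank %d of %d\n", rank, size);

    MPI_Finalize();                         /* shut down the MPI runtime */
    return 0;
}

Such a program is typically built with an MPI wrapper compiler and launched with a process launcher, e.g. mpicc hello.c -o hello and then mpirun -np 4 ./hello.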


MPI_Datatype:

- MPI_CHAR: signed char

- MPI_SHORT: signed short int

- MPI_INT: signed int

- MPI_LONG: signed long int

- MPI_UNSIGNED_CHAR: unsigned char

- MPI_UNSIGNED_SHORT: unsigned short int

- MPI_UNSIGNED: unsigned int

- MPI_UNSIGNED_LONG: unsigned long int

- MPI_FLOAT: float

- MPI_DOUBLE: double

- MPI_LONG_DOUBLE: long double

- MPI_BYTE: especially useful when sending struct data

- MPI_PACKED
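
A rough sketch of the MPI_BYTE case, assuming a plain-old-data struct with identical layout on sender and receiver (MPI_Type_create_struct is the more portable alternative); the particle struct and the two-rank setup are made up for illustration:

#include <mpi.h>

struct particle { double x, y, z; int id; };   /* plain-old-data payload */

/* rank 0 sends one struct to rank 1 as raw bytes */
void send_particle(int rank)
{
    struct particle p = { 1.0, 2.0, 3.0, 7 };

    if (rank == 0)
        MPI_Send(&p, sizeof p, MPI_BYTE, 1, 0, MPI_COMM_WORLD);
    else if (rank == 1)
        MPI_Recv(&p, sizeof p, MPI_BYTE, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
}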


MPI communication modes:

- Synchronous send (MPI_Ssend / MPI_Issend): sends synchronously; the send completes only once the message has been received on the receiving side

- Ready send (MPI_Rsend / MPI_Irsend): starts sending on the assumption that the receiver has already posted a matching receive

- Buffered send (MPI_Bsend / MPI_Ibsend): copies the message into a buffer so that the send can complete independently of the receive (see the sketch below)
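
A sketch of buffered mode, assuming rank 0 sends a single int to rank 1; MPI_Bsend requires the user to attach a buffer first:

#include <mpi.h>
#include <stdlib.h>

void bsend_example(int rank)
{
    int msg = 42;
    int bufsize = sizeof(int) + MPI_BSEND_OVERHEAD;   /* payload plus MPI bookkeeping */
    char *buf = malloc(bufsize);

    MPI_Buffer_attach(buf, bufsize);

    if (rank == 0)
        MPI_Bsend(&msg, 1, MPI_INT, 1, 0, MPI_COMM_WORLD);   /* returns once copied into buf */
    else if (rank == 1)
        MPI_Recv(&msg, 1, MPI_INT, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);

    MPI_Buffer_detach(&buf, &bufsize);   /* blocks until buffered sends have gone out */
    free(buf);
}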


MPI blocking communication (computation waits until the call completes):

MPI_Send(void *buf, int count, MPI_Datatype datatype, int dest, int tag, MPI_Comm comm)

MPI_Recv(void *buf, int count, MPI_Datatype datatype, int source, int tag, MPI_Comm comm, MPI_Status *status)

MPI_Sendrecv(const void *sendbuf, int sendcount, MPI_Datatype sendtype,

                int dest, int sendtag,

                void *recvbuf, int recvcount, MPI_Datatype recvtype,

                int source, int recvtag,

                MPI_Comm comm, MPI_Status *status)

- MPI_ANY_SOURCE: accept a message from any process

- MPI_ANY_TAG: accept a message carrying any tag

- MPI_STATUS_IGNORE: pass this when the status does not need to be examined

- MPI_PROC_NULL: passing this as dest or source turns the corresponding send or receive into a no-op

- MPI_Probe(int source, int tag, MPI_Comm comm, MPI_Status *status): blocking test for an incoming message

- MPI_Get_count(MPI_Status *status, MPI_Datatype datatype, int *count): number of elements in the received message
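
A sketch that combines MPI_Probe and MPI_Get_count to receive a message whose length the receiver does not know in advance; the two-rank setup and the array contents are just an illustration:

#include <mpi.h>
#include <stdlib.h>

void recv_unknown_length(int rank)
{
    if (rank == 0) {
        int data[5] = { 1, 2, 3, 4, 5 };
        MPI_Send(data, 5, MPI_INT, 1, 0, MPI_COMM_WORLD);
    } else if (rank == 1) {
        MPI_Status status;
        int count;

        /* block until a message from rank 0 with tag 0 is pending, without receiving it yet */
        MPI_Probe(0, 0, MPI_COMM_WORLD, &status);
        MPI_Get_count(&status, MPI_INT, &count);   /* how many ints the pending message holds */

        int *data = malloc(count * sizeof(int));
        MPI_Recv(data, count, MPI_INT, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        free(data);
    }
}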


MPI non-blocking communication (computation continues while the transfer is in flight):

- MPI_Isend(void *buf, int count, MPI_Datatype datatype, int dest, int tag, MPI_Comm comm, MPI_Request *request)

- MPI_Irecv(void *buf, int count, MPI_Datatype datatype, int source, int tag, MPI_Comm comm, MPI_Request *request)

- MPI_Wait(MPI_Request *request, MPI_Status *status)

- MPI_Waitall(int count, MPI_Request array_of_requests[], MPI_Status array_of_statuses[])

- MPI_Test(MPI_Request *request, int *flag, MPI_Status *status)
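
A sketch of a non-blocking exchange between two ranks that overlaps the transfer with other work; the two-rank assumption is mine:

#include <mpi.h>

void nonblocking_exchange(int rank)
{
    int sendval = rank, recvval;
    int partner = (rank == 0) ? 1 : 0;   /* assumes exactly two ranks */
    MPI_Request reqs[2];

    MPI_Irecv(&recvval, 1, MPI_INT, partner, 0, MPI_COMM_WORLD, &reqs[0]);
    MPI_Isend(&sendval, 1, MPI_INT, partner, 0, MPI_COMM_WORLD, &reqs[1]);

    /* ... useful computation can run here while the messages are in flight ... */

    MPI_Waitall(2, reqs, MPI_STATUSES_IGNORE);   /* recvval is valid only after this returns */
}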


MPI collective communication:

Source: KISTI Lecture Note

- MPI_Bcast(void *buffer, int count, MPI_Datatype datatype, int root, MPI_Comm comm)

- MPI_Gather(void *sendbuf, int sendcount, MPI_Datatype sendtype, void *recvbuf, int recvcount, MPI_Datatype recvtype, int root, MPI_Comm comm)

- MPI_Gatherv(void *sendbuf, int sendcount, MPI_Datatype sendtype, void *recvbuf, int recvcounts[], int displs[], MPI_Datatype recvtype, int root, MPI_Comm comm)

- MPI_Allgather(void *sendbuf, int sendcount, MPI_Datatype sendtype, void *recvbuf, int recvcount, MPI_Datatype recvtype, MPI_Comm comm)

- MPI_Allgatherv(void *sendbuf, int sendcount, MPI_Datatype sendtype, void *recvbuf, int recvcounts[], int displs[], MPI_Datatype recvtype, MPI_Comm comm)

- MPI_Reduce(void *sendbuf, void *recvbuf, int count, MPI_Datatype datatype, MPI_Op op, int root, MPI_Comm comm)


* MPI_Reduce operations and data types:

- MPI_SUM(sum)

- MPI_PROD(product)

- MPI_MAX(maximum)

- MPI_MIN(minimum)

- MPI_MAXLOC(max value and location)

- MPI_MINLOC(min value and location)

* Data types used with MPI_MAXLOC and MPI_MINLOC:

- MPI_FLOAT_INT: {MPI_FLOAT, MPI_INT}

- MPI_DOUBLE_INT: {MPI_DOUBLE, MPI_INT}

- MPI_LONG_INT: {MPI_LONG, MPI_INT}

- MPI_2INT: {MPI_INT, MPI_INT}

- MPI_SHORT_INT: {MPI_SHORT, MPI_INT}

- MPI_LONG_DOUBLE_INT: {MPI_LONG_DOUBLE, MPI_INT}

- MPI_LAND(logical AND)

- MPI_LOR(logical OR)

- MPI_LXOR(logical XOR)

- MPI_BAND(bitwise AND)

- MPI_BOR(bitwise OR)

- MPI_BXOR(bitwise XOR)
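
A sketch of MPI_MAXLOC with the MPI_DOUBLE_INT pair type: each rank contributes a value paired with its own rank as the location, and the root learns both the maximum and which rank owned it (the local values are placeholders):

#include <mpi.h>
#include <stdio.h>

void global_max_with_owner(int rank)
{
    struct { double value; int index; } local, global;   /* layout matches MPI_DOUBLE_INT */

    local.value = 1.5 * rank;   /* placeholder local result */
    local.index = rank;         /* "location" = owning rank */

    MPI_Reduce(&local, &global, 1, MPI_DOUBLE_INT, MPI_MAXLOC, 0, MPI_COMM_WORLD);

    if (rank == 0)
        printf("max %.2f found on rank %d\n", global.value, global.index);
}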


- MPI_Allreduce(void *sendbuf, void *recvbuf, int count, MPI_Datatype datatype, MPI_Op op, MPI_Comm comm)

- MPI_Scatter(void *sendbuf, int sendcount, MPI_Datatype sendtype, void *recvbuf, int recvcount, MPI_Datatype recvtype, int root, MPI_Comm comm)

- MPI_Scatterv(void *sendbuf, int sendcounts[], int displs[], MPI_Datatype sendtype, void *recvbuf, int recvcount, MPI_Datatype recvtype, int root, MPI_Comm comm)

- MPI_Barrier(MPI_Comm comm)

- MPI_Alltoall(void *sendbuf, int sendcount, MPI_Datatype sendtype, void *recvbuf, int recvcount, MPI_Datatype recvtype, MPI_Comm comm)

- MPI_Alltoallv(void *sendbuf, int sendcounts[], int sdispls[], MPI_Datatype sendtype, void *recvbuf, int recvcounts[], int rdispls[], MPI_Datatype recvtype, MPI_Comm comm)

- MPI_Reduce_scatter(void *sendbuf, void *recvbuf, int recvcounts[], MPI_Datatype datatype, MPI_Op op, MPI_Comm comm)

- MPI_Scan(void *sendbuf, void *recvbuf, int count, MPI_Datatype datatype, MPI_Op op, MPI_Comm comm)
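
A sketch tying a few of the collectives together: the root scatters an array, every rank sums its chunk, and MPI_Reduce combines the partial sums at the root (the array contents and the chunk size of 4 are arbitrary choices):

#include <mpi.h>
#include <stdio.h>
#include <stdlib.h>

int main(int argc, char *argv[])
{
    int rank, size;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    const int n_per_rank = 4;
    int *full = NULL;
    if (rank == 0) {   /* only the root needs the full array */
        full = malloc(size * n_per_rank * sizeof(int));
        for (int i = 0; i < size * n_per_rank; i++)
            full[i] = i;
    }

    int chunk[4];
    MPI_Scatter(full, n_per_rank, MPI_INT, chunk, n_per_rank, MPI_INT, 0, MPI_COMM_WORLD);

    int local_sum = 0, total = 0;
    for (int i = 0; i < n_per_rank; i++)
        local_sum += chunk[i];

    MPI_Reduce(&local_sum, &total, 1, MPI_INT, MPI_SUM, 0, MPI_COMM_WORLD);

    if (rank == 0) {
        printf("total = %d\n", total);
        free(full);
    }

    MPI_Finalize();
    return 0;
}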

