Top 50+ Solved Distributed Memory Programming MCQ Questions and Answers
Q. MPI specifies the functionality of _________________ communication routines.
a. High-level
b. Low-level
c. Intermediate-level
d. Expert-level
Q. A collective communication in which data belonging to a single process is sent to all of the processes in the communicator is called a ________________.
a. Scatter
b. Gather
c. Broadcast
d. Allgather
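As a minimal sketch of the broadcast pattern the question describes: the root process's data is copied to every process in the communicator. (This assumes an MPI installation; compile with mpicc and launch with mpiexec.)

```c
/* Broadcast sketch: rank 0's value is copied to all ranks in
   MPI_COMM_WORLD by a single collective call. */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char *argv[]) {
    int rank, value = 0;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    if (rank == 0)
        value = 42;                /* only the root has the data initially */
    MPI_Bcast(&value, 1, MPI_INT, 0, MPI_COMM_WORLD);
    printf("rank %d now has value %d\n", rank, value);
    MPI_Finalize();
    return 0;
}
```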
Q. __________________ is a nonnegative integer that the destination can use to selectively screen messages.
a. Dest
b. Type
c. Address
d. length
Q. The routine ________________ combines data from all processes (by adding them, in this case) and returns the result to a single process.
a. MPI_Reduce
b. MPI_Bcast
c. MPI_Finalize
d. MPI_Comm_size
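The reduction described above can be sketched as follows, assuming an MPI installation: each process contributes one integer, and the sum ends up only on the root.

```c
/* Reduce sketch: every process contributes its rank; MPI_SUM adds the
   contributions and the total is delivered only to rank 0. */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char *argv[]) {
    int rank, size, total = 0;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);
    MPI_Reduce(&rank, &total, 1, MPI_INT, MPI_SUM, 0, MPI_COMM_WORLD);
    if (rank == 0)
        printf("sum of ranks 0..%d = %d\n", size - 1, total);
    MPI_Finalize();
    return 0;
}
```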
Q. The easiest way to create communicators with new groups is with _____________.
a. MPI_Comm_rank
b. MPI_Comm_create
c. MPI_Comm_split
d. MPI_Comm_group
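A minimal sketch of splitting a communicator: processes that pass the same "color" argument land in the same new communicator, and the "key" orders ranks within it. (Assumes an MPI installation; the even/odd split is just an illustration.)

```c
/* Split sketch: divide MPI_COMM_WORLD into two communicators by
   rank parity, then report each process's rank in its subgroup. */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char *argv[]) {
    int world_rank, sub_rank;
    MPI_Comm parity_comm;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &world_rank);
    /* color = rank % 2 separates even and odd ranks;
       key = world_rank preserves the original ordering in each group */
    MPI_Comm_split(MPI_COMM_WORLD, world_rank % 2, world_rank, &parity_comm);
    MPI_Comm_rank(parity_comm, &sub_rank);
    printf("world rank %d -> rank %d in its parity communicator\n",
           world_rank, sub_rank);
    MPI_Comm_free(&parity_comm);
    MPI_Finalize();
    return 0;
}
```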
Q. _______________ is an object that holds information about the received message, including, for example, the actual count of elements received.
a. buff
b. count
c. tag
d. status
Q. The _______________ operation similarly computes an element-wise reduction of vectors, but this time leaves the result scattered among the processes.
a. Reduce-scatter
b. Reduce (to-one)
c. Allreduce
d. None of the above
Q. __________________ is the principal alternative to shared memory parallel programming.
a. Multiple passing
b. Message passing
c. Message programming
d. None of the above
Q. ________________ may complete even if fewer than count elements have been received.
a. MPI_Recv
b. MPI_Send
c. MPI_Get_count
d. MPI_ANY_SOURCE
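This behavior can be sketched as follows, assuming an MPI installation and exactly two processes: the receiver asks for up to 10 elements, the sender sends only 3, and MPI_Get_count on the status object reports what actually arrived.

```c
/* Short-message sketch: MPI_Recv's count is an upper bound, not a
   requirement; MPI_Get_count reveals how many elements were received. */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char *argv[]) {
    int rank, buf[10], received;
    MPI_Status status;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    if (rank == 0) {
        int data[3] = {1, 2, 3};
        MPI_Send(data, 3, MPI_INT, 1, 0, MPI_COMM_WORLD);   /* send only 3 */
    } else if (rank == 1) {
        MPI_Recv(buf, 10, MPI_INT, 0, 0, MPI_COMM_WORLD, &status);
        MPI_Get_count(&status, MPI_INT, &received);
        printf("requested up to 10 ints, received %d\n", received);
    }
    MPI_Finalize();
    return 0;
}
```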
Q. A ___________ is a script whose main purpose is to run some program. In this case, the program is the C compiler.
a. wrapper script
b. communication functions
c. wrapper simplifies
d. type definitions
Q. ________________ returns in its second argument the number of processes in the communicator.
a. MPI_Init
b. MPI_Comm_size
c. MPI_Finalize
d. MPI_Comm_rank
Q. _____________ always blocks until a matching message has been received.
a. MPI_TAG
b. MPI_SOURCE
c. MPI_Recv
d. MPI_ERROR
Q. Communication functions that involve all the processes in a communicator are called ___________.
a. MPI_Get_count
b. collective communications
c. buffer the message
d. nonovertaking
Q. MPI_Send and MPI_Recv are called _____________ communications.
a. Collective Communication
b. Tree-Structured Communication
c. point-to-point
d. Collective Computation
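A minimal sketch of the point-to-point pattern named above, assuming an MPI installation and exactly two processes: one explicit sender, one explicit receiver, and the receive blocks until the matching message arrives.

```c
/* Point-to-point sketch: rank 0 sends one message to rank 1;
   MPI_Recv blocks until a matching message has been received. */
#include <mpi.h>
#include <stdio.h>
#include <string.h>

int main(int argc, char *argv[]) {
    int rank;
    char msg[32];
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    if (rank == 0) {
        strcpy(msg, "hello from rank 0");
        MPI_Send(msg, (int)strlen(msg) + 1, MPI_CHAR, 1, 0, MPI_COMM_WORLD);
    } else if (rank == 1) {
        MPI_Recv(msg, 32, MPI_CHAR, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        printf("rank 1 got: %s\n", msg);
    }
    MPI_Finalize();
    return 0;
}
```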