Top 150+ Solved Multi-core Architectures and Programming MCQ Questions and Answers
Q. The routine ________________ combines data from all processes (by adding it, in this case) and returns the result to a single process.
a. MPI_Reduce
b. MPI_Bcast
c. MPI_Finalize
d. MPI_Comm_size
Answer: a. MPI_Reduce
Q. The easiest way to create communicators with new groups is with _____________.
a. MPI_Comm_rank
b. MPI_Comm_create
c. MPI_Comm_split
d. MPI_Comm_group
Answer: c. MPI_Comm_split
Q. _______________ is an object that holds information about the received message, including, for example, its actual count.
a. buff
b. count
c. tag
d. status
Answer: d. status
Q. The _______________ operation similarly computes an element-wise reduction of vectors, but this time leaves the result scattered among the processes.
a. Reduce-scatter
b. Reduce (to-one)
c. Allreduce
d. None of the above
Answer: a. Reduce-scatter
Q. __________________ is the principal alternative to shared-memory parallel programming.
a. Multiple passing
b. Message passing
c. Message programming
d. None of the above
Answer: b. Message passing
Q. ________________ may complete even if fewer than count elements have been received.
a. MPI_Recv
b. MPI_Send
c. MPI_Get_count
d. MPI_ANY_SOURCE
Answer: a. MPI_Recv
Q. A ___________ is a script whose main purpose is to run some program. In this case, the program is the C compiler.
a. wrapper script
b. communication functions
c. wrapper simplifies
d. type definitions
Answer: a. wrapper script
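A minimal sketch of such a wrapper script, in the spirit of mpicc; the install prefix and flags are illustrative assumptions, and here it only prints the compiler command it would run (like `mpicc -show` in Open MPI and MPICH).

```shell
#!/bin/sh
# Sketch of a wrapper script: its main purpose is to run some program
# (the C compiler), with the MPI include and library flags filled in.
# MPI_DIR is an assumed, illustrative install prefix.
MPI_DIR=${MPI_DIR:-/usr/local/mpi}
CC=${CC:-cc}

# Like `mpicc -show`, print the command that would be executed.
echo "$CC -I$MPI_DIR/include -L$MPI_DIR/lib $* -lmpi"
```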
Q. ________________ returns in its second argument the number of processes in the communicator.
a. MPI_Init
b. MPI_Comm_size
c. MPI_Finalize
d. MPI_Comm_rank
Answer: b. MPI_Comm_size
Q. _____________ always blocks until a matching message has been received.
a. MPI_TAG
b. MPI_SOURCE
c. MPI_Recv
d. MPI_ERROR
Answer: c. MPI_Recv
Q. Communication functions that involve all the processes in a communicator are called ___________.
a. MPI_Get_count
b. collective communications
c. buffer the message
d. nonovertaking
Answer: b. collective communications
Q. MPI_Send and MPI_Recv are called _____________ communications.
a. Collective Communication
b. Tree-Structured Communication
c. point-to-point
d. Collective Computation
Answer: c. point-to-point
Q. The processes exchange partial results instead of using one-way communications. Such a communication pattern is sometimes called a ___________.
a. butterfly
b. broadcast
c. Data Movement
d. Synchronization
Answer: a. butterfly
Q. A collective communication in which data belonging to a single process is sent to all of the processes in the communicator is called a _________.
a. broadcast
b. reductions
c. Scatter
d. Gather
Answer: a. broadcast
Q. In MPI, a ______________ can be used to represent any collection of data items in memory by storing both the types of the items and their relative locations in memory.
a. Allgather
b. derived datatype
c. displacement
d. beginning
Answer: b. derived datatype
Q. MPI provides a function, ____________, that returns the number of seconds that have elapsed since some time in the past.
a. MPI_Wtime
b. MPI_Barrier
c. MPI_Scatter
d. MPI_Comm
Answer: a. MPI_Wtime