Top 150+ Solved Parallel Computing MCQ Questions and Answers

Questions 91 to 105 of 111

Q. Latency is

a. Partitioning in which the data associated with a problem is decomposed; each parallel task then works on a portion of the data.

b. Partitioning in which the focus is on the computation to be performed rather than on the data it manipulates; the problem is decomposed according to the work that must be done, and each task then performs a portion of the overall work.

c. It is the time it takes to send a minimal (0 byte) message from one point to another

d. None of these

  • c. It is the time it takes to send a minimal (0 byte) message from one point to another
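The idea behind answer (c) can be illustrated with a small sketch that times one minimal message between two endpoints, here a connected pair of local sockets standing in for two communicating tasks (a hypothetical illustration; real latency benchmarks average many ping-pong round trips, and a 1-byte payload stands in for the "minimal message"):

```python
import socket
import time

# A connected pair of local sockets: a stand-in for two communicating tasks.
a, b = socket.socketpair()

# Time a single minimal message from one endpoint to the other.
start = time.perf_counter()
a.sendall(b"x")                 # 1-byte payload as the "minimal message"
echo = b.recv(1)
latency = time.perf_counter() - start

a.close()
b.close()
print(f"one-message latency: {latency * 1e6:.1f} microseconds")
```

The absolute number depends entirely on the machine; what matters is that latency is measured per message, independent of payload size.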

Q. Domain Decomposition

a. Partitioning in which the data associated with a problem is decomposed; each parallel task then works on a portion of the data.

b. Partitioning in which the focus is on the computation to be performed rather than on the data it manipulates; the problem is decomposed according to the work that must be done, and each task then performs a portion of the overall work.

c. It is the time it takes to send a minimal (0 byte) message from point A to point B

d. None of these

  • a. Partitioning in which the data associated with a problem is decomposed; each parallel task then works on a portion of the data.
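A minimal sketch of domain decomposition, assuming Python's `concurrent.futures`: the data is split into portions and each task sums its own chunk, after which the partial results are combined (the chunk sizes and worker count are illustrative choices, not part of the original question):

```python
from concurrent.futures import ThreadPoolExecutor

data = list(range(1, 101))            # the problem's data
n_tasks = 4

# Domain decomposition: split the data; each task works on its own portion.
chunk = len(data) // n_tasks
portions = [data[i * chunk:(i + 1) * chunk] for i in range(n_tasks)]

with ThreadPoolExecutor(max_workers=n_tasks) as pool:
    partial_sums = list(pool.map(sum, portions))

total = sum(partial_sums)             # combine the per-task results
print(partial_sums, total)
```

Note that every task runs the *same* code; only the slice of data differs, which is the defining trait of domain decomposition.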

Q. Functional Decomposition:

a. Partitioning in which the data associated with a problem is decomposed; each parallel task then works on a portion of the data.

b. Partitioning in which the focus is on the computation to be performed rather than on the data it manipulates; the problem is decomposed according to the work that must be done, and each task then performs a portion of the overall work.

c. It is the time it takes to send a minimal (0 byte) message from point A to point B

d. None of these

  • b. Partitioning in which the focus is on the computation to be performed rather than on the data it manipulates; the problem is decomposed according to the work that must be done, and each task then performs a portion of the overall work.
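The contrast with domain decomposition can be sketched as follows, again assuming `concurrent.futures`: here every task sees the *same* data but performs a *different* piece of the overall work (the four statistics chosen are illustrative):

```python
from concurrent.futures import ThreadPoolExecutor
import statistics

data = [4, 8, 15, 16, 23, 42]

# Functional decomposition: the problem is split by the work to be done,
# not by the data; each task performs a different function.
work = {"total": sum, "minimum": min, "maximum": max, "mean": statistics.mean}

with ThreadPoolExecutor(max_workers=len(work)) as pool:
    futures = {name: pool.submit(fn, data) for name, fn in work.items()}
    results = {name: f.result() for name, f in futures.items()}

print(results)
```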

Q. Synchronous communications

a. They require some type of “handshaking” between tasks that are sharing data; this can be structured explicitly in code by the programmer, or it may happen at a lower level, unknown to the programmer.

b. It involves data sharing between more than two tasks, which are often specified as being members in a common group, or collective.

c. It involves two tasks with one task acting as the sender/producer of data, and the other acting as the receiver/consumer.

d. It allows tasks to transfer data independently from one another.

  • a. They require some type of “handshaking” between tasks that are sharing data; this can be structured explicitly in code by the programmer, or it may happen at a lower level, unknown to the programmer.
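An explicit handshake of the kind answer (a) describes can be sketched with two `threading.Event` flags (a simplified model; real synchronous message passing, e.g. a blocking MPI send, performs this handshake inside the library):

```python
import threading

ready = threading.Event()   # sender's signal: "data is available"
ack = threading.Event()     # receiver's signal: "data has been consumed"
mailbox = []
received = []

def sender():
    mailbox.append("payload")
    ready.set()             # first half of the handshake
    ack.wait()              # block until the receiver acknowledges

def receiver():
    ready.wait()            # wait for the sender's signal
    received.append(mailbox.pop())
    ack.set()               # second half of the handshake

t1 = threading.Thread(target=sender)
t2 = threading.Thread(target=receiver)
t1.start(); t2.start()
t1.join(); t2.join()
print(received)
```

The key property is that neither side proceeds until both halves of the handshake have completed, which is exactly what makes the communication synchronous.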

Q. Collective communication

a. It involves data sharing between more than two tasks, which are often specified as being members in a common group, or collective.

b. It involves two tasks with one task acting as the sender/producer of data, and the other acting as the receiver/consumer.

c. It allows tasks to transfer data independently from one another.

d. None of these

  • a. It involves data sharing between more than two tasks, which are often specified as being members in a common group, or collective.
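A toy model of group-wide data sharing, using one `queue.Queue` per group member as its inbox (the `broadcast` and `gather` helpers are hypothetical names for this sketch, echoing the collective operations found in libraries such as MPI):

```python
import queue

def broadcast(value, inboxes):
    # One root task shares the same value with every member of the group.
    for inbox in inboxes:
        inbox.put(value)

def gather(inboxes):
    # Collect one contribution from every member of the group.
    return [inbox.get() for inbox in inboxes]

group = [queue.Queue() for _ in range(4)]   # one inbox per group member
broadcast(7, group)
print(gather(group))
```

Unlike point-to-point communication, every operation here involves the whole group at once.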

Q. Point-to-point communication referred to

a. It involves data sharing between more than two tasks, which are often specified as being members in a common group, or collective.

b. It involves two tasks with one task acting as the sender/producer of data, and the other acting as the receiver/consumer.

c. It allows tasks to transfer data independently from one another.

d. None of these

  • b. It involves two tasks with one task acting as the sender/producer of data, and the other acting as the receiver/consumer.
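The sender/producer and receiver/consumer pairing can be sketched with a shared `queue.Queue` as the channel between exactly two tasks (the `None` sentinel marking end-of-stream is an illustrative convention, not part of the question):

```python
import queue
import threading

channel = queue.Queue()
received = []

def producer():
    for item in range(5):
        channel.put(item)     # sender/producer side
    channel.put(None)         # sentinel: no more data

def consumer():
    while True:
        item = channel.get()  # receiver/consumer side
        if item is None:
            break
        received.append(item)

t1 = threading.Thread(target=producer)
t2 = threading.Thread(target=consumer)
t1.start(); t2.start()
t1.join(); t2.join()
print(received)
```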

Q. Uniform Memory Access (UMA) referred to

a. Here all processors have equal access and access times to memory

b. Here if one processor updates a location in shared memory, all the other processors know about the update.

c. Here one SMP can directly access memory of another SMP and not all processors have equal access time to all memories

d. None of these

  • a. Here all processors have equal access and access times to memory

Q. Asynchronous communications

a. It involves data sharing between more than two tasks, which are often specified as being members in a common group, or collective.

b. It involves two tasks with one task acting as the sender/producer of data, and the other acting as the receiver/consumer.

c. It allows tasks to transfer data independently from one another.

d. None of these

  • c. It allows tasks to transfer data independently from one another.
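The independence that answer (c) describes can be sketched with an unbounded queue: the sender deposits its messages and finishes without ever waiting for the receiver, which drains the channel later on its own schedule (the `log` list exists only to make the ordering visible in this sketch):

```python
import queue
import threading

channel = queue.Queue()   # unbounded, so put() returns immediately
log = []

def sender():
    for i in range(3):
        channel.put(i)    # no handshake: the sender never waits for the receiver
    log.append("send done")

t = threading.Thread(target=sender)
t.start()
t.join()                  # the sender completes before any receive happens

received = [channel.get() for _ in range(3)]
log.append("recv done")
print(log, received)
```

Contrast this with the synchronous case, where the sender would block until the receiver acknowledged each transfer.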

Q. Granularity is

a. In parallel computing, it is a qualitative measure of the ratio of computation to communication

b. Here relatively small amounts of computational work are done between communication events

c. Relatively large amounts of computational work are done between communication/synchronization events

d. None of these

  • a. In parallel computing, it is a qualitative measure of the ratio of computation to communication

Q. Coarse-grain Parallelism

a. In parallel computing, it is a qualitative measure of the ratio of computation to communication

b. Here relatively small amounts of computational work are done between communication events

c. Relatively large amounts of computational work are done between communication/synchronization events

d. None of these

  • c. Relatively large amounts of computational work are done between communication/synchronization events

Q. Cache Coherent UMA (CC-UMA) is

a. Here all processors have equal access and access times to memory

b. Here if one processor updates a location in shared memory, all the other processors know about the update.

c. Here one SMP can directly access memory of another SMP and not all processors have equal access time to all memories

d. None of these

  • b. Here if one processor updates a location in shared memory, all the other processors know about the update.

Q. Non-Uniform Memory Access (NUMA) is

a. Here all processors have equal access and access times to memory

b. Here if one processor updates a location in shared memory, all the other processors know about the update.

c. Here one SMP can directly access memory of another SMP and not all processors have equal access time to all memories

d. None of these

  • c. Here one SMP can directly access memory of another SMP and not all processors have equal access time to all memories

Q. In the threads model of parallel programming

a. A single process can have multiple, concurrent execution paths

b. A single process can have only a single execution path.

c. Multiple processes can have a single concurrent execution path.

d. None of these

  • a. A single process can have multiple, concurrent execution paths
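Answer (a) can be demonstrated directly with Python's `threading` module: one process, several concurrent execution paths, all sharing the process's memory (writing distinct keys into the shared `results` dict keeps this sketch race-free):

```python
import threading

results = {}

def path(name, n):
    # Each thread is an independent, concurrent execution path inside
    # the one process; all paths share the process's memory (`results`).
    results[name] = sum(range(n))

threads = [threading.Thread(target=path, args=(f"path{i}", 10 * (i + 1)))
           for i in range(3)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(results)
```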