Top 150+ Solved Parallel Computing MCQ Questions and Answers


Q. In CISC architecture, most of the complex instructions are stored in _____.

a. register

b. diodes

c. cmos

d. transistors

  • d. transistors

Q. Which of the following architectures is power efficient?

a. CISC

b. RISC

c. ISA

d. IANA

  • b. RISC

Q. It is the simultaneous use of multiple compute resources to solve a computational problem

a. Parallel computing

b. Single processing

c. Sequential computing

d. None of these

  • a. Parallel computing
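To make the definition concrete, here is a minimal sketch using Python's standard multiprocessing module; the square function and the pool of four workers are illustrative assumptions, not part of the question.

```python
# A minimal sketch: one computational problem (squaring a list of
# numbers) solved by multiple compute resources at the same time.
from multiprocessing import Pool

def square(n):
    # Each worker process handles a piece of the overall problem.
    return n * n

if __name__ == "__main__":
    with Pool(processes=4) as pool:           # four compute resources
        results = pool.map(square, range(8))  # work divided among them
    print(results)  # [0, 1, 4, 9, 16, 25, 36, 49]
```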

Q. Parallel Execution

a. A sequential execution of a program, one statement at a time

b. Execution of a program by more than one task, with each task being able to execute the same or different statement at the same moment in time

c. A program or set of instructions that is executed by a processor.

d. None of these

  • b. Execution of a program by more than one task, with each task being able to execute the same or different statement at the same moment in time
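As a hedged sketch of this definition, the example below starts two OS processes that execute the same function at the same moment; the task names and step counts are illustrative.

```python
# Two tasks executing the same statements at the same moment in time.
# A serial program would instead run task-A to completion, then task-B.
from multiprocessing import Process
import os
import time

def task(name):
    for i in range(3):
        print(f"{name} (pid {os.getpid()}) step {i}")
        time.sleep(0.1)  # the prints from A and B interleave

if __name__ == "__main__":
    a = Process(target=task, args=("task-A",))
    b = Process(target=task, args=("task-B",))
    a.start(); b.start()  # both tasks now run concurrently
    a.join(); b.join()
```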

Q. Scalability refers to a parallel system’s (hardware and/or software) ability

a. To demonstrate a proportionate increase in parallel speedup with the removal of some processors

b. To demonstrate a proportionate increase in parallel speedup with the addition of more processors

c. To demonstrate a proportionate decrease in parallel speedup with the addition of more processors

d. None of these

  • b. To demonstrate a proportionate increase in parallel speedup with the addition of more processors
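As a worked illustration of this kind of speedup, the sketch below applies Amdahl's law, a standard model that is not part of the question; the parallel fraction of 0.9 is an assumed value.

```python
# Speedup is the wall-clock time of serial execution divided by the
# wall-clock time of parallel execution. Amdahl's law predicts it when
# a fraction p of the work can be parallelized across n processors.
def amdahl_speedup(p, n):
    return 1.0 / ((1.0 - p) + p / n)

for n in (2, 4, 8, 16):
    print(f"{n:>2} processors -> speedup {amdahl_speedup(0.9, n):.2f}")
# Output rises from 1.82 toward a ceiling of 10: a scalable system is
# one whose speedup keeps growing proportionately as processors are added.
```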

Q. Parallel computing can include

a. Single computer with multiple processors

b. Arbitrary number of computers connected by a network

c. Combination of both A and B

d. None of these

  • c. Combination of both A and B

Q. Serial Execution

a. A sequential execution of a program, one statement at a time

b. Execution of a program by more than one task, with each task being able to execute the same or different statement at the same moment in time

c. A program or set of instructions that is executed by a processor.

d. None of these

  • a. A sequential execution of a program, one statement at a time

Q. Shared Memory is

a. A computer architecture where all processors have direct access to common physical memory

b. It refers to network-based memory access for physical memory that is not common.

  • a. A computer architecture where all processors have direct access to common physical memory
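A minimal sketch of the shared-memory model, using Python's multiprocessing.Value as the common physical memory; the counter and the four workers are illustrative.

```python
# All workers have direct access to one common memory location.
from multiprocessing import Process, Value

def increment(counter):
    with counter.get_lock():  # coordinate access to the shared location
        counter.value += 1

if __name__ == "__main__":
    counter = Value("i", 0)   # one shared integer in common memory
    workers = [Process(target=increment, args=(counter,)) for _ in range(4)]
    for w in workers:
        w.start()
    for w in workers:
        w.join()
    print(counter.value)      # 4 -- every worker updated the same memory
```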

Q. Distributed Memory

a. A computer architecture where all processors have direct access to common physical memory

b. It refers to network-based memory access for physical memory that is not common

  • b. It refers to network-based memory access for physical memory that is not common
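By contrast, a sketch of the distributed-memory model: each process keeps private data, and another process can obtain it only through an explicit message. Here a Pipe stands in for the network; real distributed-memory systems typically use message-passing libraries such as MPI.

```python
# Memory is not common: reading remote data requires a message exchange.
from multiprocessing import Process, Pipe

def owner(conn):
    local_data = {"x": 42}    # private memory, not directly visible elsewhere
    key = conn.recv()         # a remote process must ask over the "network"
    conn.send(local_data[key])
    conn.close()

if __name__ == "__main__":
    here, there = Pipe()
    p = Process(target=owner, args=(there,))
    p.start()
    here.send("x")            # remote read = send a request...
    print(here.recv())        # ...and receive the reply: 42
    p.join()
```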

Q. Parallel Overhead is

a. Observed speedup of a code which has been parallelized, defined as the ratio of wall-clock time of serial execution to wall-clock time of parallel execution

b. The amount of time required to coordinate parallel tasks. It includes factors such as: Task start-up time, Synchronizations, Data communications.

c. Refers to the hardware that comprises a given parallel system - having many processors

d. None of these

  • b. The amount of time required to coordinate parallel tasks. It includes factors such as: Task start-up time, Synchronizations, Data communications.
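The sketch below makes this overhead visible: for a trivially small task, starting workers and communicating data can cost more than the computation itself; the task and data sizes are illustrative.

```python
# Parallel overhead: task start-up, synchronization, and data
# communication time, measured against a trivial computation.
import time
from multiprocessing import Pool

def tiny_task(n):
    return n + 1  # almost no computational work

if __name__ == "__main__":
    data = list(range(1000))

    t0 = time.perf_counter()
    serial = [tiny_task(n) for n in data]
    t_serial = time.perf_counter() - t0

    t0 = time.perf_counter()
    with Pool(4) as pool:     # worker start-up + data communication
        parallel = pool.map(tiny_task, data)
    t_parallel = time.perf_counter() - t0

    # The parallel run is typically *slower* here: the overhead of
    # creating workers and shipping data outweighs the trivial work.
    print(f"serial {t_serial:.4f}s  parallel {t_parallel:.4f}s")
```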

Q. Massively Parallel

a. Observed speedup of a code which has been parallelized, defined as the ratio of wall-clock time of serial execution to wall-clock time of parallel execution

b. The amount of time required to coordinate parallel tasks. It includes factors such as: Task start-up time, Synchronizations, Data communications.

c. Refers to the hardware that comprises a given parallel system - having many processors

d. None of these

  • c. Refers to the hardware that comprises a given parallel system - having many processors

Q. Fine-grain Parallelism is

a. In parallel computing, it is a qualitative measure of the ratio of computation to communication

b. Here relatively small amounts of computational work are done between communication events

c. Relatively large amounts of computational work are done between communication / synchronization events

d. None of these

  • b. Here relatively small amounts of computational work are done between communication events
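A small sketch of granularity using Pool.map's chunksize parameter: a chunksize of 1 forces a communication event per item (fine grain), while a large chunksize does much work between communications (coarse grain). The data and chunk sizes are illustrative.

```python
# Same computation, different ratio of computation to communication.
from multiprocessing import Pool

def work(x):
    return x * x

if __name__ == "__main__":
    data = list(range(10_000))
    with Pool(4) as pool:
        # Fine grain: one communication event per item of work.
        fine = pool.map(work, data, chunksize=1)
        # Coarse grain: 2,500 items of work between communication events.
        coarse = pool.map(work, data, chunksize=2500)
    print(fine == coarse)  # True -- only the communication pattern differs
```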

Q. In Shared Memory

a. Changes in a memory location effected by one processor do not affect all other processors.

b. Changes in a memory location effected by one processor are visible to all other processors

c. Changes in a memory location effected by one processor are randomly visible to all other processors.

d. None of these

  • b. Changes in a memory location effected by one processor are visible to all other processors

Q. In Shared Memory:

a. Here all processors access all memory as a global address space

b. Here all processors have individual memory

c. Here some processors access all memory as a global address space and some do not

d. None of these

  • a. Here all processors access all memory as a global address space

Q. In Shared Memory

a. Multiple processors can operate independently but share the same memory resources

b. Multiple processors can operate independently but do not share the same memory resources

c. Multiple processors can operate independently but some do not share the same memory resources

d. None of these

  • a. Multiple processors can operate independently but share the same memory resources
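The three answers above can be seen in one short sketch: Python threads share a single address space, operate independently, and a change made by one is visible to the rest. The flag variable is an illustrative stand-in for a shared memory location.

```python
# One global address space: a write by one thread is visible to all.
import threading

shared = {"flag": 0}      # memory common to every thread

def writer():
    shared["flag"] = 1    # change effected by one "processor"

t = threading.Thread(target=writer)
t.start()
t.join()                  # synchronization point: the write has happened
print(shared["flag"])     # 1 -- the main thread sees the writer's change
```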