Top 150+ Solved Multi-core Architectures and Programming MCQ Questions and Answers


Q. Programs that can maintain a constant efficiency without increasing the problem size are sometimes said to be _______________.

a. weakly scalable

b. strongly scalable

c. send_buf

d. recv_buf

  • b. strongly scalable
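
For reference (standard textbook definitions, not part of the original question set): parallel efficiency is E(p) = S(p)/p, the speedup divided by the number of processes p. A program is strongly scalable if E(p) stays roughly constant as p grows with the problem size held fixed, and weakly scalable if E(p) stays constant only when the problem size grows at the same rate as p.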

Q. The idea that parallelism can be used to increase the (parallel) size of the problem is applicable in ___________________.

a. Amdahl's Law

b. Gustafson-Barsis's Law

c. Newton's Law

d. Pascal's Law

  • b. Gustafson-Barsis's Law
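
For reference (standard form of the law, not taken from the source): Gustafson-Barsis's law gives the scaled speedup S(p) = p - s(p - 1), where s is the serial fraction of the run time measured on the parallel system. Because the problem size is allowed to grow with p, the predicted speedup is not capped at 1/s, as it would be under Amdahl's law for a fixed problem size.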

Q. Deciding whether to use weak or strong scaling is part of ______________ in addressing the challenges of distributed memory programming.

a. Splitting the problem

b. Speeding up computations

c. Speeding up communication

d. Speeding up hardware

  • b. Speeding up computations

Q. Which of the following is the BEST description of the Message Passing Interface (MPI)?

a. A specification of a shared memory library

b. MPI uses objects called communicators and groups to define which collection of processes may communicate with each other

c. Only communicators, and not groups, are accessible to the programmer by a "handle"

d. A communicator is an ordered set of processes

  • b. MPI uses objects called communicators and groups to define which collection of processes may communicate with each other
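
As a minimal sketch of what the correct option describes (variable names and the even/odd split are illustrative, not part of the question set): every process belongs to MPI_COMM_WORLD, the group behind a communicator can be queried, and MPI_Comm_split derives a new communicator over a subset of the processes.

    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char *argv[]) {
        MPI_Init(&argc, &argv);

        int world_rank, world_size;
        MPI_Comm_rank(MPI_COMM_WORLD, &world_rank);   /* this process' id in the communicator */
        MPI_Comm_size(MPI_COMM_WORLD, &world_size);   /* how many processes it contains       */

        MPI_Group world_group;
        MPI_Comm_group(MPI_COMM_WORLD, &world_group); /* the ordered group behind the communicator */

        /* Split MPI_COMM_WORLD so even-ranked and odd-ranked processes
           end up in two separate, smaller communicators.               */
        MPI_Comm half_comm;
        MPI_Comm_split(MPI_COMM_WORLD, world_rank % 2, world_rank, &half_comm);

        int half_rank;
        MPI_Comm_rank(half_comm, &half_rank);
        printf("world rank %d of %d -> rank %d in its half\n",
               world_rank, world_size, half_rank);

        MPI_Group_free(&world_group);
        MPI_Comm_free(&half_comm);
        MPI_Finalize();
        return 0;
    }

Built and launched with the usual MPI tooling, e.g. mpicc and mpiexec -n 4 (commands are illustrative).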

Q. The set of NP-complete problems is often denoted by ____________

a. NP-C

b. NP-C or NPC

c. NPC

d. None of the above

  • b. NP-C or NPC

Q. Pthreads has a nonblocking version of pthread_mutex_lock called __________

a. pthread_mutex_lock

b. pthread_mutex_trylock

c. pthread_mutex_acquirelock

d. pthread_mutex_releaselock

  • b. pthread_mutex_trylock
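
A minimal hedged sketch of the difference from the blocking call: pthread_mutex_trylock returns immediately, with 0 on success or a nonzero error code (typically EBUSY) when the mutex is already held. The thread logic and messages below are illustrative only.

    #include <pthread.h>
    #include <stdio.h>

    static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

    void *worker(void *arg) {
        /* Never blocks: either we get the lock now, or we move on. */
        if (pthread_mutex_trylock(&lock) == 0) {
            puts("got the lock, doing protected work");
            pthread_mutex_unlock(&lock);
        } else {
            puts("lock busy, doing other work instead of waiting");
        }
        return NULL;
    }

    int main(void) {
        pthread_t t1, t2;
        pthread_create(&t1, NULL, worker, NULL);
        pthread_create(&t2, NULL, worker, NULL);
        pthread_join(t1, NULL);
        pthread_join(t2, NULL);
        return 0;
    }

Compiled with something like cc -pthread trylock.c (file name is illustrative).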

Q. What are the algorithms for identifying which subtrees we assign to the processes or threads? __________

a. breadth-first search

b. depth-first search

c. both depth-first search and breadth-first search

d. None of the above

  • c. both depth-first search and breadth-first search

Q. What are the scoping clauses in OpenMP _________

a. Shared Variables & Private Variables

b. Shared Variables

c. Private Variables

d. None of the above

  • a. Shared Variables & Private Variables
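
A brief illustrative sketch of the two scoping clauses: n is listed as shared (one copy seen by the whole team) and tid as private (one copy per thread). The variable names and thread count are assumptions made for the example only.

    #include <omp.h>
    #include <stdio.h>

    int main(void) {
        int n = 8;        /* shared: a single copy visible to every thread   */
        int tid = -1;     /* private: each thread works on its own copy      */

        #pragma omp parallel num_threads(4) shared(n) private(tid)
        {
            tid = omp_get_thread_num();
            printf("thread %d of the team sees n = %d\n", tid, n);
        }
        /* The threads' private copies of tid are discarded when the region ends. */
        return 0;
    }

Compiled with OpenMP enabled, e.g. cc -fopenmp scoping.c (file name is illustrative).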

Q. The function My_avail_tour_count can simply return the ________

a. Size of the process’ stack

b. Subtree rooted at the partial tour

c. Cut-off length

d. None of the above

  • a. Size of the process’ stack
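
This question refers to the parallel tree-search (TSP) program in Pacheco's textbook. The sketch below is a hypothetical stand-in, not the book's code: the stack type and field names are invented for illustration, and the only point is that the function can simply report the size of the process' stack of partial tours so the caller can judge whether there is work to share with idle processes.

    #include <stdio.h>

    /* Illustrative stand-in for the process' stack of partial tours. */
    typedef struct {
        int count;                     /* number of partial tours on the stack */
    } my_stack_t;

    /* Sketch of My_avail_tour_count: just the size of the process' stack. */
    int My_avail_tour_count(const my_stack_t *stack) {
        return stack->count;
    }

    int main(void) {
        my_stack_t stack = { 5 };
        printf("available tours: %d\n", My_avail_tour_count(&stack));
        return 0;
    }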

Q. MPI provides a function, ________, for packing data into a buffer of contiguous memory.

a. MPI_Pack

b. MPI_UnPack

c. MPI_Pack Count

d. MPI_Packed

  • a. MPI_Pack
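
A small hedged example of the call (the buffer size and packed values are illustrative): MPI_Pack copies typed data into a contiguous user buffer while advancing a position counter, and MPI_Unpack reverses the process. In a real program the packed buffer would be sent with the MPI_PACKED datatype.

    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char *argv[]) {
        MPI_Init(&argc, &argv);

        int    n   = 1024;
        double x   = 3.14;
        char   buf[100];
        int    pos = 0;                       /* running offset into buf */

        /* Pack an int and a double into one contiguous buffer. */
        MPI_Pack(&n, 1, MPI_INT,    buf, sizeof buf, &pos, MPI_COMM_WORLD);
        MPI_Pack(&x, 1, MPI_DOUBLE, buf, sizeof buf, &pos, MPI_COMM_WORLD);

        /* Unpack them again in the same order. */
        int    n2;
        double x2;
        pos = 0;
        MPI_Unpack(buf, sizeof buf, &pos, &n2, 1, MPI_INT,    MPI_COMM_WORLD);
        MPI_Unpack(buf, sizeof buf, &pos, &x2, 1, MPI_DOUBLE, MPI_COMM_WORLD);
        printf("unpacked n = %d, x = %f\n", n2, x2);

        MPI_Finalize();
        return 0;
    }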

Q. Two MPI_Irecv calls are made specifying different buffers and tags, but the same sender and request location. How can one determine that the buffer specified in the first call has valid data?

a. Call MPI_Probe

b. Call MPI_Testany with the same request listed twice

c. Call MPI_Wait twice with the same request

d. Look at the data in the buffer and try to determine whether it is valid

  • c. Call MPI_Wait twice with the same request
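
For comparison, standard practice is sketched below (buffers, tags and ranks are illustrative): each nonblocking receive gets its own request handle, and both are completed with MPI_Waitall before either buffer is read. Reusing a single request location, as in the question, overwrites the handle returned by the first MPI_Irecv. The sketch assumes at least two processes.

    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char *argv[]) {
        MPI_Init(&argc, &argv);

        int rank;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        if (rank == 0) {                       /* receiver */
            int a = 0, b = 0;
            MPI_Request req[2];                /* one request per MPI_Irecv */
            MPI_Irecv(&a, 1, MPI_INT, 1, 10, MPI_COMM_WORLD, &req[0]);
            MPI_Irecv(&b, 1, MPI_INT, 1, 20, MPI_COMM_WORLD, &req[1]);
            /* The buffers are only guaranteed valid after the requests complete. */
            MPI_Waitall(2, req, MPI_STATUSES_IGNORE);
            printf("received a = %d, b = %d\n", a, b);
        } else if (rank == 1) {                /* sender */
            int x = 1, y = 2;
            MPI_Send(&x, 1, MPI_INT, 0, 10, MPI_COMM_WORLD);
            MPI_Send(&y, 1, MPI_INT, 0, 20, MPI_COMM_WORLD);
        }

        MPI_Finalize();
        return 0;
    }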

Q. Which of the following statements is not true?

a. MPI_Isend and MPI_Irecv are non-blocking message passing routines of MPI

b. MPI_Issend and MPI_Ibsend are non-blocking message passing routines of MPI

c. MPI_Send and MPI_Recv are non-blocking message passing routines of MPI

d. MPI_Ssend and MPI_Bsend are blocking message passing routines of MPI

  • c. MPI_Send and MPI_Recv are non-blocking message passing routines of MPI
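
A brief hedged sketch contrasting a nonblocking send (MPI_Isend completed by MPI_Wait) with a blocking receive (MPI_Recv); the ranks, tag and payload are illustrative, and the program assumes at least two processes.

    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char *argv[]) {
        MPI_Init(&argc, &argv);

        int rank, msg = 42;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        if (rank == 0) {
            MPI_Request req;
            /* Nonblocking: MPI_Isend returns immediately; the send buffer must
               not be modified until MPI_Wait reports the request complete.     */
            MPI_Isend(&msg, 1, MPI_INT, 1, 0, MPI_COMM_WORLD, &req);
            /* ... computation could overlap the communication here ...        */
            MPI_Wait(&req, MPI_STATUS_IGNORE);
        } else if (rank == 1) {
            int recvd;
            /* Blocking: MPI_Recv returns only once the message has arrived. */
            MPI_Recv(&recvd, 1, MPI_INT, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
            printf("rank 1 received %d\n", recvd);
        }

        MPI_Finalize();
        return 0;
    }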

Q. Which of the following is not valid with reference to the Message Passing Interface (MPI)?

a. MPI can run on any hardware platform

b. The programming model is a distributed memory model

c. All parallelism is implicit

d. MPI_Comm_size returns the total number of MPI processes in the specified communicator

  • c. All parallelism is implicit
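
A minimal hedged sketch of the explicit SPMD style the valid options describe (the print statement is illustrative): every process calls MPI_Comm_size and MPI_Comm_rank, and the programmer decides explicitly what each rank does; nothing is parallelized implicitly.

    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char *argv[]) {
        MPI_Init(&argc, &argv);

        int size, rank;
        MPI_Comm_size(MPI_COMM_WORLD, &size);   /* total number of MPI processes */
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);   /* this process' id: 0 .. size-1 */

        /* Parallelism is explicit: the code spells out what each rank does. */
        printf("process %d of %d\n", rank, size);

        MPI_Finalize();
        return 0;
    }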