Top 150+ Solved Neural Network MCQ Questions and Answers

Questions 1 to 15 of 144

Q. A 3-input neuron is trained to output a zero when the input is 110, and a one when the input is 111. After generalization, the output will be zero when and only when the input is:

a. 000 or 110 or 011 or 101

b. 010 or 100 or 110 or 101

c. 000 or 010 or 110 or 100

d. 100 or 111 or 101 or 001

  • c. 000 or 010 or 110 or 100
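One way to sanity-check this answer is to assume the trained neuron generalizes by Hamming distance, outputting 0 for inputs closer to the training pattern 110 and 1 for inputs closer to 111. This is a hedged reading of the question's intent, not a rule it states explicitly:

```python
def hamming(a: str, b: str) -> int:
    """Number of bit positions where a and b differ."""
    return sum(x != y for x, y in zip(a, b))

def predict(bits: str) -> int:
    """Output 0 if the input is closer to 110, else 1 (closer to 111)."""
    return 0 if hamming(bits, "110") < hamming(bits, "111") else 1

# Collect every 3-bit input the neuron would map to zero.
zeros = [f"{i:03b}" for i in range(8) if predict(f"{i:03b}") == 0]
print(zeros)  # ['000', '010', '100', '110']
```

Under this assumption, the zero-outputs are exactly the four patterns in option (c): the inputs whose last bit is 0, since 110 and 111 differ only in that bit.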

Q. A perceptron is:

a. a single layer feed-forward neural network with pre-processing

b. an auto-associative neural network

c. a double layer auto-associative neural network

d. a neural network that contains feedback

  • a. a single layer feed-forward neural network with pre-processing

Q. An auto-associative network is:

a. a neural network that contains no loops

b. a neural network that contains feedback

c. a neural network that has only one loop

d. a single layer feed-forward neural network with pre-processing

  • b. a neural network that contains feedback

Q. Which is true for neural networks?

a. it has a set of nodes and connections

b. each node computes its weighted input

c. a node can be in an excited or non-excited state

d. all of the mentioned

  • d. all of the mentioned

Q. Neuro software is:

a. software used to analyze neurons

b. powerful and easy-to-use neural network software

c. software designed to aid experts in the real world

d. software used by neurosurgeons

  • b. powerful and easy-to-use neural network software

Q. Why is the XOR problem exceptionally interesting to neural network researchers?

a. because it can be expressed in a way that allows you to use a neural network

b. because it is a complex binary operation that cannot be solved using neural networks

c. because it can be solved by a single layer perceptron

d. because it is the simplest linearly inseparable problem that exists.

  • d. because it is the simplest linearly inseparable problem that exists.
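Linear inseparability can be illustrated with a brute-force check: no single threshold unit reproduces XOR. The sketch below searches a small grid of integer weights and biases purely for illustration; the result in fact holds for all real-valued weights, since no line separates the XOR classes in the plane:

```python
import itertools

def fires(w1: int, w2: int, b: int, x1: int, x2: int) -> int:
    """A single threshold unit: 1 if the weighted sum exceeds zero."""
    return 1 if w1 * x1 + w2 * x2 + b > 0 else 0

# The four XOR cases: (x1, x2, target).
xor_cases = [(0, 0, 0), (0, 1, 1), (1, 0, 1), (1, 1, 0)]

# Try every integer weight/bias combination in a small range.
found = any(
    all(fires(w1, w2, b, x1, x2) == t for x1, x2, t in xor_cases)
    for w1, w2, b in itertools.product(range(-3, 4), repeat=3)
)
print(found)  # False: no single unit computes XOR
```

By contrast, the same search would succeed immediately for linearly separable functions such as AND or OR.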

Q. What is back propagation?

a. it is another name given to the curvy function in the perceptron

b. it is the transmission of error back through the network to adjust the inputs

c. it is the transmission of error back through the network to allow weights to be adjusted so that the network can learn.

d. none of the mentioned

  • c. it is the transmission of error back through the network to allow weights to be adjusted so that the network can learn.
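The answer above can be made concrete with a minimal sketch of backpropagation: a hypothetical 2-2-1 sigmoid network trained on XOR, where the output error is transmitted back through the network to adjust both layers of weights. Network size, seed, learning rate, and epoch count are illustrative choices, not part of the original question:

```python
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0.0], [1.0], [1.0], [0.0]])

# Small random weights for a 2-2-1 network.
W1 = rng.normal(size=(2, 2)); b1 = np.zeros((1, 2))
W2 = rng.normal(size=(2, 1)); b2 = np.zeros((1, 1))

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def mse() -> float:
    hidden = sigmoid(X @ W1 + b1)
    return float(np.mean((sigmoid(hidden @ W2 + b2) - y) ** 2))

mse_start = mse()
lr = 0.5
for _ in range(5000):
    # Forward pass.
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    # Backward pass: propagate the output error back through the network.
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    # Adjust weights in both layers using the back-propagated error.
    W2 -= lr * (h.T @ d_out); b2 -= lr * d_out.sum(axis=0, keepdims=True)
    W1 -= lr * (X.T @ d_h);   b1 -= lr * d_h.sum(axis=0, keepdims=True)

mse_end = mse()
print(round(mse_start, 4), round(mse_end, 4))
```

The mean squared error falls as training proceeds, which is exactly the "learning" that option (c) describes.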

Q. Why are linearly separable problems of interest to neural network researchers?

a. because they are the only class of problem that a network can solve successfully

b. because they are the only class of problem that a perceptron can solve successfully

c. because they are the only mathematical functions that are continuous

d. because they are the only mathematical functions you can draw

  • b. because they are the only class of problem that a perceptron can solve successfully

Q. Which of the following is not a promise of artificial neural networks?

a. it can explain results

b. it can survive the failure of some nodes

c. it has inherent parallelism

d. it can handle noise

  • a. it can explain results

Q. A perceptron adds up all the weighted inputs it receives, and if it exceeds a certain value, it outputs a 1, otherwise it just outputs a 0.

a. true

b. false

c. sometimes – it can also output intermediate values

d. can’t say

  • a. true

Q. The name for the function in the previous question is:

a. step function

b. heaviside function

c. logistic function

d. perceptron function

  • b. heaviside function
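The thresholding behaviour from the two questions above can be sketched directly: the perceptron sums its weighted inputs and passes the result through a Heaviside step function. The AND-gate weights below are chosen purely for illustration:

```python
def heaviside(z: float) -> int:
    """Step function: 1 if the argument exceeds zero, else 0."""
    return 1 if z > 0 else 0

def perceptron(inputs, weights, bias: float) -> int:
    """Sum the weighted inputs and threshold with the Heaviside step."""
    weighted_sum = sum(w * x for w, x in zip(weights, inputs))
    return heaviside(weighted_sum + bias)

# With weights (1, 1) and bias -1.5, the unit acts as an AND gate:
# it fires only when both inputs are 1.
for x1 in (0, 1):
    for x2 in (0, 1):
        print(x1, x2, "->", perceptron((x1, x2), (1, 1), -1.5))
```

Because the output is always exactly 0 or 1, never an intermediate value, option (a) in the true/false question above holds.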

Q. Having multiple perceptrons can actually solve the XOR problem satisfactorily: this is because each perceptron can partition off a linear part of the space itself, and they can then combine their results.

a. true – this works always, and these multiple perceptrons learn to classify even complex problems.

b. false – perceptrons are mathematically incapable of solving linearly inseparable functions, no matter what you do

c. true – perceptrons can do this but are unable to learn to do it – they have to be explicitly hand-coded

d. false – just having a single perceptron is enough

  • c. true – perceptrons can do this but are unable to learn to do it – they have to be explicitly hand-coded
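Option (c) can be demonstrated with hand-coded weights, exactly as the answer describes: two fixed threshold units partition the space (an OR unit and an AND unit), and a third combines them into XOR. Nothing here is learned; every weight is written by hand:

```python
def step(z: float) -> int:
    """Threshold unit: 1 if the argument exceeds zero, else 0."""
    return 1 if z > 0 else 0

def xor(x1: int, x2: int) -> int:
    h_or = step(x1 + x2 - 0.5)            # fires if either input is 1
    h_and = step(x1 + x2 - 1.5)           # fires only if both inputs are 1
    return step(h_or - 2 * h_and - 0.5)   # OR AND NOT(AND) = XOR

print([xor(a, b) for a, b in [(0, 0), (0, 1), (1, 0), (1, 1)]])  # [0, 1, 1, 0]
```

Each hidden unit draws one linear boundary; their combination carves out the linearly inseparable XOR region, which no single perceptron can do alone.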

Q. The network that involves backward links from the output to the input and hidden layers is called ____.

a. self organizing maps

b. perceptrons

c. recurrent neural network

d. multi layered perceptron

  • c. recurrent neural network

Q. Different learning methods do not include:

a. memorization

b. analogy

c. deduction

d. introduction

  • d. introduction