Q15. Marks: +2.0 | UGC NET Paper 2: Computer Science, 2nd January 2026, Shift 1
Given below are two statements: one is labelled as Assertion A and the other is labelled as Reason R.
Assertion A: Neural networks are capable of approximating any continuous function to an arbitrary degree of accuracy.
Reason R: This is explained by the universal approximation theorem.
In the light of the above statements, choose the most appropriate answer from the options given below:
1. Both A and R are correct and R is the correct explanation of A ✓ Correct
2. Both A and R are correct but R is NOT the correct explanation of A
3. A is correct but R is not correct
4. A is not correct but R is correct
Solution
The correct answer is option 1: Both A and R are correct and R is the correct explanation of A.
Key Points
The universal approximation theorem states that a feedforward neural network with a single hidden layer containing a finite number of neurons can approximate any continuous function on compact subsets of ℝⁿ, under mild assumptions on the activation function (a formal statement follows this list).
In the given assertion, it is stated that neural networks can approximate any continuous function to an arbitrary degree of accuracy, which aligns with the universal approximation theorem.
The reason correctly explains the assertion: the universal approximation theorem is precisely the result that guarantees this approximation capability, so both statements are valid and R is the correct explanation of A.
This theorem highlights the power of neural networks in function approximation, making them invaluable in various machine learning and deep learning applications.
However, while the theorem guarantees the existence of an approximation, it does not specify the number of neurons required, the training process, or the computational complexity involved.
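Formally, in one standard formulation (for a sigmoidal activation σ), the theorem says: for every continuous function f: K → ℝ on a compact set K ⊂ ℝⁿ and every ε > 0, there exist a width N, weights wᵢ ∈ ℝⁿ, biases bᵢ ∈ ℝ, and output coefficients αᵢ ∈ ℝ such that

\left| f(x) - \sum_{i=1}^{N} \alpha_i \, \sigma\!\left( w_i^{\top} x + b_i \right) \right| < \varepsilon \quad \text{for all } x \in K.

The sum is exactly the output of a single-hidden-layer network with N neurons, so the inequality formalizes "approximation to an arbitrary degree of accuracy" from the assertion.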
Additional Information
Universal Approximation Theorem:
Originally proven for feedforward networks with a single hidden layer and sigmoidal activation functions (Cybenko, 1989).
Later extended to other activation functions such as ReLU (Rectified Linear Unit); in fact, any non-polynomial activation suffices (Leshno et al., 1993).
Applicable to continuous functions on compact domains, emphasizing that neural networks are capable of learning complex mappings (see the sketch after this list for an empirical demonstration).
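As an illustration, here is a minimal Python sketch (using NumPy; the random-feature setup, the target function sin, and the width N = 50 are all assumptions made for the demo, not part of the theorem). It builds a single-hidden-layer sigmoid network on a compact interval and fits only the output weights by least squares; this is a simplification of full gradient-based training, but it is enough to show the approximation in action.

import numpy as np

rng = np.random.default_rng(0)

# Target: a continuous function on the compact interval [-pi, pi].
x = np.linspace(-np.pi, np.pi, 200).reshape(-1, 1)
y = np.sin(x)

# One hidden layer of N sigmoid units with randomly drawn parameters.
N = 50
W = rng.normal(scale=3.0, size=(1, N))   # hidden-layer weights
b = rng.normal(scale=3.0, size=N)        # hidden-layer biases
H = 1.0 / (1.0 + np.exp(-(x @ W + b)))   # hidden activations, shape (200, N)

# Fit only the output coefficients alpha by linear least squares.
alpha, *_ = np.linalg.lstsq(H, y, rcond=None)

y_hat = H @ alpha
print("max |f(x) - network(x)| =", float(np.abs(y - y_hat).max()))

With enough hidden units, the printed worst-case error becomes small, which is exactly the behaviour the theorem guarantees for continuous targets on compact sets.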
Practical Implications:
The theorem does not guarantee practical feasibility: the network size required for a given accuracy can be prohibitively large (the sketch after this list illustrates how accuracy depends on width).
Training such networks effectively is challenging and depends on factors like optimization algorithms, data quality, and computational resources.
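A small extension of the earlier sketch (same assumed random-feature setup; the more oscillatory target function is likewise an assumption for illustration) shows why size matters in practice: the worst-case error typically shrinks only as the hidden layer grows.

import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(-np.pi, np.pi, 400).reshape(-1, 1)
y = np.sin(3 * x) * np.exp(-x**2)        # a more oscillatory continuous target

for N in (2, 8, 32, 128):
    W = rng.normal(scale=4.0, size=(1, N))
    b = rng.normal(scale=4.0, size=N)
    H = 1.0 / (1.0 + np.exp(-(x @ W + b)))      # hidden activations
    alpha, *_ = np.linalg.lstsq(H, y, rcond=None)  # fit output weights
    print(f"N = {N:4d}  max error = {np.abs(y - H @ alpha).max():.4f}")

The trend, not the exact numbers, is the point: harder (more oscillatory) targets demand wider hidden layers, which is the practical cost hidden behind the theorem's existence guarantee.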
Limitations of Neural Networks:
While they are powerful approximators, neural networks require a significant amount of data and computational resources for training.
Overfitting can occur if networks are too complex for the given dataset.
Interpretability remains a challenge, as neural networks are often considered "black boxes."
Applications:
Function approximation in engineering and scientific simulations.
Image and speech recognition tasks where complex patterns need to be identified.
Modeling nonlinear relationships in financial forecasting and other domains.