On approximate learning by multi-layered feedforward circuits

Authors: Bhaskar DasGupta and Barbara Hammer

Source: Theoretical Computer Science, Vol. 348, Issue 1, December 2005, pp. 95-127
(Special Issue on Algorithmic Learning Theory (ALT 2000)).

Abstract. We deal with the problem of efficiently learning feedforward neural networks. First, we consider the objective of maximizing the ratio of correctly classified points to the size of the training set (the success ratio). We show that it is NP-hard to approximate this ratio within some constant relative error for architectures with varying input dimension, one hidden layer, and two hidden neurons, where the activation function in the hidden layer is either the sigmoid function under the assumption of epsilon-separation, or the semilinear function. For single-hidden-layer threshold networks with varying input dimension and n hidden neurons, approximation within a relative error depending on n is NP-hard even when restricted to situations where the number of examples is limited with respect to n.
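
For reference, the success-ratio objective and the activation functions named above can be written as follows (a sketch using standard definitions; the paper's exact normalizations and thresholds may differ). For a training set $(x_1,y_1),\ldots,(x_m,y_m)$ with labels $y_i \in \{0,1\}$ and a network function $f_w$ with weights $w$, the quantity to be maximized is
\[
  \frac{\bigl|\{\, i : f_w(x_i) \text{ classifies } y_i \text{ correctly} \,\}\bigr|}{m},
\]
the sigmoid activation is $\mathrm{sgd}(x) = 1/(1+e^{-x})$, and the semilinear activation is the saturated-linear function
\[
  \mathrm{lin}(x) = \max\bigl(0, \min(1, x)\bigr).
\]
Epsilon-separation requires, roughly, that the network output on every training point differ from the classification threshold by at least $\varepsilon$: positive examples lie at least $\varepsilon$ above it and negative examples at least $\varepsilon$ below it.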

Afterwards, we consider the objective of minimizing the failure ratio in the presence of misclassification errors. We show that it is NP-hard to approximate the failure ratio within any constant larger than one for multilayered threshold networks with varying input dimension and a fixed number of neurons in the hidden layer, provided the thresholds of the neurons in the first hidden layer are zero. Furthermore, in the same situation, even obtaining weak approximations is almost NP-hard.
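
The failure ratio compares the number of misclassifications made by a hypothesis with the minimum number achievable by any network of the given architecture; a sketch in standard notation (the paper's precise formulation may differ) is
\[
  \mathrm{fr}(w) \;=\; \frac{\bigl|\{\, i : f_w(x_i) \neq y_i \,\}\bigr|}{\min_{w'} \bigl|\{\, i : f_{w'}(x_i) \neq y_i \,\}\bigr|},
\]
which is meaningful precisely when the optimum number of misclassifications is nonzero, i.e., in the presence of unavoidable errors.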


Keywords: Neural networks; Loading problem; NP-hardness; Approximation


© 2005 Elsevier Science B.V.