On approximate learning by multilayered feedforward circuits
Authors: Bhaskar DasGupta and Barbara Hammer
Source:
Theoretical Computer Science, Vol. 348, Issue 1,
December 2005, pp. 95–127
Abstract. We deal with the problem of efficient learning of feedforward neural networks. First, we consider the objective of maximizing the ratio of correctly classified points to the size of the training set. We show that it is NP-hard to approximate this ratio within some constant relative error for architectures with varying input dimension and a single hidden layer of two hidden neurons, either when the activation function in the hidden layer is the sigmoid function and ε-separation is assumed, or when the activation function is the semilinear function. For single-hidden-layer threshold networks with varying input dimension and n hidden neurons, approximation within a relative error depending on n is NP-hard even when restricted to situations where the number of examples is limited with respect to n. Afterwards, we consider the objective of minimizing the failure ratio in the presence of misclassification errors. We show that it is NP-hard to approximate the failure ratio within any positive constant for a multilayered threshold network with varying input dimension and a fixed number of neurons in the hidden layer, provided the thresholds of the neurons in the first hidden layer are zero. Furthermore, even obtaining weak approximations is almost NP-hard in the same situation.
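To make the objects in the abstract concrete, the following sketch (an illustration, not the authors' construction) shows a single-hidden-layer network with two sigmoid hidden neurons and the success-ratio objective, i.e. the fraction of correctly classified training points whose approximation the paper proves NP-hard. All weight values and the `success_ratio` helper are hypothetical.

```python
import math

def sigmoid(x):
    """Standard sigmoid activation, as used in the hidden layer."""
    return 1.0 / (1.0 + math.exp(-x))

def two_hidden_unit_net(x, w1, b1, w2, b2, v, c):
    """Single hidden layer with two sigmoid units; binary output
    obtained by thresholding the linear output unit at zero."""
    h1 = sigmoid(sum(wi * xi for wi, xi in zip(w1, x)) + b1)
    h2 = sigmoid(sum(wi * xi for wi, xi in zip(w2, x)) + b2)
    return 1 if v[0] * h1 + v[1] * h2 + c >= 0 else 0

def success_ratio(net, samples):
    """The quantity the first hardness result concerns:
    correctly classified points divided by training-set size."""
    correct = sum(1 for x, y in samples if net(x) == y)
    return correct / len(samples)

# Example: hypothetical weights realizing AND on binary inputs.
net = lambda x: two_hidden_unit_net(x, [10, 10], -15, [0, 0], 0, [1, 0], -0.5)
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
print(success_ratio(net, data))
```

Evaluating `success_ratio` for fixed weights is trivial; the paper's point is that *finding* weights that approximately maximize it over all choices is computationally intractable already for this two-unit architecture when the input dimension varies.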

© Copyright 2005 Elsevier Science B.V.