On Approximate Learning by Multi-layered Feedforward Circuits

Authors: Bhaskar DasGupta and Barbara Hammer.

Source: Lecture Notes in Artificial Intelligence, Vol. 1968, 2000, pp. 264–278.

Abstract. We consider the problem of efficient approximate learning by multi-layered feedforward circuits with respect to two objective functions.

First, we consider the objective of maximizing the ratio of correctly classified points to the training set size (e.g., see [3,5]). We show that for single hidden layer threshold circuits with n hidden nodes and varying input dimension, approximating this ratio within a relative error c/n^3, for some positive constant c, is NP-hard even if the number of examples is limited with respect to n. For architectures with two hidden nodes (e.g., as in [6]), approximating the objective within some fixed factor is NP-hard even if any sigmoid-like activation function in the hidden layer and ε-separation of the output [19] are considered, or if the semilinear activation function replaces the threshold function.
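For concreteness, the success-ratio objective can be sketched as follows; the notation (training set S of size m, hypothesis circuit f, optimum opt) is ours for illustration and does not appear in the abstract:

\[
  \operatorname{sr}(f, S) \;=\; \frac{\bigl|\{(x,t)\in S \,:\, f(x)=t\}\bigr|}{m},
  \qquad m = |S|,
\]

and the stated hardness result then says that, unless P = NP, no polynomial-time learner can guarantee sr(f, S) ≥ (1 − c/n^3) · opt, where opt is the best success ratio achievable by the architecture on S.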

Next, we consider the objective of minimizing the failure ratio [2]. We show that it is NP-hard to approximate the failure ratio within any constant larger than 1 for multilayered threshold circuits, provided the input biases are zero. Furthermore, even weak approximation of this objective is almost NP-hard.
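Similarly, the failure ratio of a hypothesis f can be sketched as follows (again in our illustrative notation, with the denominator, i.e., the minimum number of misclassifications achievable by the architecture, assumed to be nonzero):

\[
  \operatorname{fr}(f, S) \;=\; \frac{\bigl|\{(x,t)\in S \,:\, f(x)\neq t\}\bigr|}
                                  {\min_{g}\,\bigl|\{(x,t)\in S \,:\, g(x)\neq t\}\bigr|},
\]

so approximating the failure ratio within a constant μ > 1 means guaranteeing fr(f, S) ≤ μ whenever the training set cannot be classified without errors.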

© 2000 Springer