### Generalization Error of Linear Neural Networks in Unidentifiable Cases

**Author:** Kenji Fukumizu

**Source:** *Lecture Notes in Artificial Intelligence*, Vol. 1720, 1999, pp. 51–62.

**Abstract.**
Statistical asymptotic theory is widely used to derive theoretical results in
computational and statistical learning theory. It describes the limiting
distribution of the maximum likelihood estimator (MLE) as a normal distribution.
In layered models such as neural networks, however, the regularity conditions of
the asymptotic theory are not necessarily satisfied: the true parameter is not
identifiable if the target function can be realized by a network smaller than
the model. Little has been known about the behavior of the MLE in these
unidentifiable cases. In this paper, we analyze the expectation of the
generalization error of three-layer linear neural networks and elucidate its
strange behavior in unidentifiable cases. We show that in unidentifiable cases
the expected generalization error is larger than what the usual asymptotic
theory predicts, and that it depends on the rank of the target function.
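The unidentifiability the abstract refers to can be seen directly in the model class. A minimal NumPy sketch, with illustrative dimensions not taken from the paper: a three-layer linear network computes f(x) = BAx, so any invertible matrix G yields a different parameter pair (BG⁻¹, GA) realizing the same function; and when a hidden unit is effectively unused (the target has rank below the hidden width), the corresponding column of B is completely unconstrained, giving a continuum of equivalent parameters rather than a discrete set.

```python
import numpy as np

rng = np.random.default_rng(0)
d_in, hidden, d_out = 5, 3, 4  # illustrative sizes, not from the paper

# Three-layer linear network: f(x) = B @ A @ x
A = rng.standard_normal((hidden, d_in))
B = rng.standard_normal((d_out, hidden))
x = rng.standard_normal(d_in)

# Generic unidentifiability: any invertible G reparameterizes the SAME function.
G = rng.standard_normal((hidden, hidden))
A2, B2 = G @ A, B @ np.linalg.inv(G)
assert np.allclose(B @ A @ x, B2 @ A2 @ x)

# Rank-deficient targets are worse: zero out one row of A (so the second
# hidden unit receives nothing); the matching column of B then has no effect
# on the function at all and can be set arbitrarily.
A[1, :] = 0.0
B_free = B.copy()
B_free[:, 1] = rng.standard_normal(d_out)  # arbitrary column, same function
assert np.allclose(B @ A @ x, B_free @ A @ x)
```

This is why the regularity conditions of classical asymptotic theory fail here: the set of true parameters is not a single point but a continuum, so the MLE's limiting distribution need not be the usual normal one.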

© 1999 Springer-Verlag