Language Learning with a Bounded Number of Mind Changes

Authors: Steffen Lange and Thomas Zeugmann

Source: “STACS 93, 10th Annual Symposium on Theoretical Aspects of Computer Science, Würzburg, Germany, February 1993, Proceedings,” (P. Enjalbert, A. Finkel, and K.W. Wagner, Eds.), Lecture Notes in Computer Science 665, pp. 682-691, Springer-Verlag 1993.

Abstract. We study the learnability of enumerable families L of uniformly recursive languages in dependence on the number of allowed mind changes, i.e., with respect to a well-studied measure of efficiency. We distinguish between exact learnability (L has to be inferred w.r.t. L) and class preserving learning (L has to be inferred w.r.t. some suitably chosen enumeration of all the languages from L) as well as between learning from positive data and from both positive and negative data.

The measure of efficiency is applied to prove the superiority of class preserving learning algorithms over exact learning. We considerably improve previously obtained results and establish two infinite hierarchies. Furthermore, we separate exact and class preserving learning from positive data that avoids overgeneralization.

Finally, language learning with a bounded number of mind changes is completely characterized in terms of recursively generable finite sets. These characterizations offer a new method to handle overgeneralizations and resolve an open question of Mukouchi (1992).


©Copyright 1993 Springer-Verlag