Authors: Steffen Lange and Thomas Zeugmann
Source: International Journal of Foundations of Computer Science Vol. 4, No. 2, 1993, 157 - 178.
In the present paper we study the learnability of enumerable families
of uniformly recursive languages in dependence on the number
of allowed mind changes, i.e., with respect to a well-studied measure of efficiency.
We distinguish between exact learnability (the target family L has to be inferred with respect to L itself) and class preserving learning (L has to be inferred with respect to some suitably chosen enumeration of all the languages from L), as well as between learning from positive data and learning from both positive and negative data.
The measure of efficiency is applied to prove the superiority of class preserving learning algorithms over exact learning. In particular, we considerably improve previously obtained results and establish two infinite hierarchies. Furthermore, we separate exact from class preserving learning from positive data that avoids overgeneralization. Finally, language learning with a bounded number of mind changes is completely characterized in terms of recursively generable finite sets. These characterizations offer a new method for handling overgeneralizations and resolve an open question of Mukouchi (1992).
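To make the mind-change measure concrete, the following is a minimal sketch (not taken from the paper) of how one counts the mind changes of a learner reading positive data for a language. The family FAMILY, the conservative guessing strategy, and all identifiers are hypothetical illustrations: a learner identifies a language with at most k mind changes if its hypothesis switches at most k times and the final hypothesis is correct.

```python
# Hypothetical indexed family for illustration: L_j = {0, 1, ..., j}.
FAMILY = {j: set(range(j + 1)) for j in range(10)}

def learner(data):
    """Conservative strategy (an assumption for this sketch):
    guess the smallest index j whose language contains all data seen."""
    seen = set(data)
    for j, lang in sorted(FAMILY.items()):
        if seen <= lang:
            return j
    return None

def mind_changes(text):
    """Run the learner on successive prefixes of the text and count
    every switch of hypothesis (a 'mind change')."""
    changes, prev = 0, None
    for n in range(1, len(text) + 1):
        hyp = learner(text[:n])
        if prev is not None and hyp != prev:
            changes += 1
        prev = hyp
    return prev, changes

# On the text 0, 2, 1, 5, 3 the learner guesses 0, then 2, 2, 5, 5:
# two mind changes, converging to index 5.
final, k = mind_changes([0, 2, 1, 5, 3])
```

Bounding k then restricts which families such a learner can identify, which is the resource the hierarchies in the paper are built on.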