Authors: Steffen Lange and Thomas Zeugmann
Source: ``Proc. 6th Annual ACM Conference on Computational Learning Theory, 1993,'' pp. 127 - 136, ACM Press.
Abstract. We study the learnability of indexed families L of uniformly recursive languages under certain monotonicity constraints. Thereby we distinguish between exact learnability (L has to be learnt with respect to the space L of hypotheses), class preserving learning (L has to be inferred with respect to some space of hypotheses having the same range as L), and class comprising inference (L has to be learnt with respect to some space of hypotheses that has a range comprising range(L)).
In particular, it is proved that, whenever monotonicity requirements are involved, exact learning is almost always weaker than class preserving inference, which itself turns out to be almost always weaker than class comprising learning. Next, we provide additional insight into the problem of under what conditions, for example, exact and class preserving learning procedures are of equal power. Finally, we deal with the question of what kind of languages have to be added to the space of hypotheses in order to obtain superior learning algorithms.
©Copyright 1993, ACM Press.