Authors: Sanjay Jain, Efim Kinber, Steffen Lange, Rolf Wiehagen and Thomas Zeugmann
Source: Theoretical Computer Science Vol. 241, No. 1-2, 2000, 143-189.
Abstract. Learning by erasing means the process of eliminating potential hypotheses from further consideration, thereby converging to the least hypothesis never eliminated. This hypothesis must be a solution to the actual learning problem.
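The erasing process described above can be illustrated with a minimal sketch. Everything here is an illustrative assumption, not taken from the paper: a toy indexed family where hypothesis i denotes the multiples of i+1, labeled examples as input, and a learner that erases every index contradicted by some example and conjectures the least index never erased.

```python
def learn_by_erasing(labeled_data, num_hypotheses, member):
    """Sketch of a learner by erasing over a finite slice of an indexed family.

    labeled_data: sequence of (x, is_positive) examples for the target language.
    member(i, x): membership test for the i-th hypothesis (assumed decidable).
    Returns the sequence of conjectures: after each example, the least index
    never eliminated so far.
    """
    erased = set()
    conjectures = []
    for x, is_positive in labeled_data:
        for i in range(num_hypotheses):
            # Erase hypothesis i as soon as it disagrees with an example.
            if i not in erased and member(i, x) != is_positive:
                erased.add(i)
        # Current conjecture: the least hypothesis not yet eliminated.
        conjectures.append(min(i for i in range(num_hypotheses) if i not in erased))
    return conjectures

# Toy family: hypothesis i = multiples of (i + 1); target is index 5 (multiples of 6).
print(learn_by_erasing(
    [(6, True), (12, True), (4, False), (9, False), (1, False)],
    10,
    lambda i, x: x % (i + 1) == 0,
))  # → [0, 0, 2, 5, 5]
```

The conjectures stabilize on index 5, the least index never erased, which here is indeed a correct hypothesis for the target. The paper's models differ in which sets of hypotheses must or may be erased; this sketch erases exactly the hypotheses contradicted by the data.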
The capabilities of learning by erasing are investigated with respect to two factors: the choice of the overall hypothesis space itself, and which sets of hypotheses must or may be erased. These learning capabilities are studied for two fundamental kinds of objects to be learned, namely languages and functions.
For learning languages by erasing, the case of learning indexed families is investigated. A complete picture of all separations and coincidences of the considered models is derived. Learning by erasing is compared with standard models of language learning such as learning in the limit, finite learning and conservative learning. The exact location of these types within the hierarchy of the models of learning by erasing is established. Necessary and sufficient conditions for language learning by erasing are presented.
For learning functions by erasing, mainly the case of learning minimal programs is studied. Various relationships and differences between the considered types of function learning by erasing, as well as between these types and standard function learning, are exhibited. In particular, these types are explored within Kolmogorov numberings, which can be viewed as natural Gödel numberings of the partial recursive functions. Necessary and sufficient conditions for function learning by erasing are derived.
©Copyright 2000, Elsevier Science