On the learnability of recursively enumerable
languages from good examples
Authors: Sanjay Jain, Steffen Lange and Jochen Nessel
Email: sanjay@comp.nus.edu.sg
Source: Theoretical Computer Science Vol. 261, Issue 1, 17 June 2001, pp. 3-29.
Abstract.
The present paper investigates identification of indexed families
of recursively enumerable languages from
good examples. We distinguish class-preserving
learning from good examples (the good examples have to be generated with respect
to a hypothesis space having the same range as the target family) and
class-comprising learning from good examples
(the good examples have to be selected with respect to a hypothesis space
comprising the range of the target family). A learner is required
to learn a target language on every finite superset of the good examples for it.
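The two hypothesis-space conditions might be written as follows (a sketch in our
own notation, with $\mathcal{L}$ the target family and $\mathcal{H}$ the
hypothesis space; the paper's precise formulation may differ):

% Class-preserving: H generates exactly the languages of the target family.
% Class-comprising: H may generate further languages as well.
\[
  \mathrm{range}(\mathcal{H}) = \mathrm{range}(\mathcal{L})
  \qquad \text{versus} \qquad
  \mathrm{range}(\mathcal{H}) \supseteq \mathrm{range}(\mathcal{L}).
\]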
If the learner's first and only conjecture is correct, the underlying
learning model is referred to as finite identification from good
examples; if the learner may make finitely many
incorrect conjectures before always outputting a correct one, the model is
referred to as limit identification from good examples.
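The two success criteria might be formalized roughly as follows (again only a
sketch; the finite set $\mathrm{Ex}(L)$ of good examples for $L$ and the
learner $M$ are our notation, and the paper's formal definitions may differ in
detail):

% Finite identification from good examples: on every finite superset S of
% the good examples that stays within L, M's single conjecture is correct.
\[
  \forall S \ \text{finite},\ \mathrm{Ex}(L) \subseteq S \subseteq L:\quad
  h_{M(S)} = L.
\]
% Limit identification from good examples: M may revise its conjecture
% finitely often and must converge to a correct one.
\[
  \forall S \ \text{finite},\ \mathrm{Ex}(L) \subseteq S \subseteq L:\quad
  \lim_{n\to\infty} M(S,n) = j \ \text{for some } j \ \text{with } h_j = L.
\]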
In the context of
class-preserving learning, it is shown that finite and limit identification
from good text examples coincide in learning power. For class-comprising
learning from good text examples, limit identification is strictly more
powerful than finite identification. Furthermore, if learning from good informant
examples is considered, limit identification is superior to finite identification in
the class-preserving as well as in the class-comprising case. Finally, we relate the
models of learning from good examples to one another as well as to the standard
learning models in the context of Gold-style language learning.