Authors: Steffen Lange and Thomas Zeugmann
Source: Journal of Experimental & Theoretical Artificial Intelligence 6, No. 1, 1994, 73 - 94.
The present paper deals with monotonic and dual monotonic language learning from positive and negative examples. The three notions of monotonicity reflect different formalizations of the requirement that the learner always has to produce better and better generalizations when fed more and more data on the concept to be learnt.
The three versions of dual monotonicity formalize the requirement that the inference device has to produce exclusively specializations that fit the target language better and better. We characterize strong-monotonic, monotonic, weak-monotonic, dual strong-monotonic, dual monotonic, and dual weak-monotonic as well as finite language learning from positive and negative data in terms of recursively generable finite sets. Thereby, we elaborate a unifying approach to monotonic language learning by showing that there is exactly one learning algorithm which can perform any monotonic inference task.
©Copyright 1994, Taylor & Francis.