Authors: Steffen Lange and Thomas Zeugmann
Source: “Analogical and Inductive Inference, AII '92, Dagstuhl Castle, Germany, October 1992, Proceedings,” (K.P. Jantke, ed.), Lecture Notes in Artificial Intelligence 642, pp. 244–259, Springer-Verlag 1992.
Abstract. The present paper deals with strong-monotonic, monotonic, and weak-monotonic language learning from positive and negative examples. The three notions of monotonicity reflect different formalizations of the requirement that the learner always has to produce better and better generalizations when fed more and more data on the concept to be learned.
We characterize strong-monotonic, monotonic, weak-monotonic, and finite language learning from positive and negative data in terms of recursively generable finite sets. Thereby, we develop a unifying approach to monotonic language learning by showing that there is exactly one learning algorithm that can perform any monotonic inference task.
©Copyright 1992 Springer-Verlag