Types of Monotonic Language Learning and Their Characterization

Authors: Steffen Lange and Thomas Zeugmann

Source: "Proc. 5th Annual ACM Conference on Computational Learning Theory," 1992, pp. 377–390, ACM Press.

Abstract. The present paper deals with strong-monotonic, monotonic and weak-monotonic language learning from positive data as well as from positive and negative examples. The three notions of monotonicity reflect different formalizations of the requirement that the learner always produce better and better generalizations when fed more and more data on the concept to be learnt.
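
For orientation, the three monotonicity constraints are usually formalized roughly as follows (a sketch in standard notation, not quoted from the paper; $L$ denotes the target language, $h_i$ and $h_{i+1}$ two consecutive hypotheses of the learner, and $L(h)$ the language described by a hypothesis $h$):

    strong-monotonic:  $L(h_i) \subseteq L(h_{i+1})$
    monotonic:         $L(h_i) \cap L \subseteq L(h_{i+1}) \cap L$
    weak-monotonic:    $L(h_i) \subseteq L(h_{i+1})$, provided the data seen up to the point where $h_{i+1}$ is output are still consistent with $h_i$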

We characterize strong-monotonic, monotonic, weak-monotonic and finite language learning from positive data in terms of recursively generable finite sets, thereby solving a problem of Angluin (1980). Moreover, we study monotonic inference with iteratively working learning devices, which are of special interest in applications. In particular, it is proved that strong-monotonic inference can be performed by iteratively working learning devices without limiting the inference capabilities, while monotonic and weak-monotonic inference cannot.
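
As a toy illustration of what an iteratively working learner is (a Python sketch, not taken from the paper: the device computes its next hypothesis from its previous hypothesis and the newest example only, never from the whole initial segment of the text), consider a learner for finite languages whose update step is trivially strong-monotonic:

    # Toy sketch (assumed example, not the paper's construction): an
    # iteratively working, strong-monotonic learner for finite languages.
    from typing import FrozenSet

    Hypothesis = FrozenSet[str]  # a finite language, given by listing its elements

    def initial_hypothesis() -> Hypothesis:
        return frozenset()

    def update(previous: Hypothesis, example: str) -> Hypothesis:
        # Iterative: only the previous hypothesis and the newest example are used.
        # Strong-monotonic: L(h_i) is always a subset of L(h_{i+1}).
        return previous | {example}

    # Feeding the positive examples one by one, the hypotheses grow
    # monotonically and converge to the finite target language.
    text = ["aa", "ab", "aa", "ba"]
    h = initial_hypothesis()
    for x in text:
        h = update(h, x)
    print(sorted(h))  # ['aa', 'ab', 'ba']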


© Copyright 1992 ACM Press