Monotonic and Dual Monotonic Language Learning

Authors: Steffen Lange, Thomas Zeugmann and Shyam Kapur

Source: Theoretical Computer Science 155 (1996) 365–410.

Abstract. Monotonic and dual monotonic language learning from positive examples as well as from positive and negative examples is investigated. Three different notions of monotonicity are considered. Each of them reflects an alternative formalization of the requirement that the learner produce better and better generalizations when fed more and more data on the concept to be learned. Strong-monotonic learning requires unconditionally that only better and better generalizations be produced. Monotonic learning demands that, of any two guesses, the one output later be at least as good as the earlier one with respect to the target language. Weak-monotonic learning is the learning-theoretic analogue of cumulativity. The corresponding three versions of dual monotonicity formalize the requirement that the inference device produce only specializations that fit the target language better and better. Dual strong-monotonic learning generates a chain of shrinking specializations converging to the target language, dual monotonic learning imposes the same requirement relative to the target language, and dual weak-monotonic learning is the dual analogue of cumulativity. The power of each of these types of monotonic and dual monotonic inference from positive as well as from positive and negative data is completely investigated within algorithmic language learning theory, and strong hierarchies are obtained.
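For orientation, the six constraints sketched in the abstract admit the following standard formalizations from the monotonic-learning literature. This is a hedged sketch, not a quotation from the paper: the notation, with t_x an initial segment of the (positive) input data, content(t_x) the set of strings occurring in it, j_x the hypothesis produced on t_x, L(j) the language described by hypothesis j, and complements taken with respect to the underlying alphabet, follows the usual conventions of the field.

% Sketch in LaTeX of the three monotonicity notions and their duals,
% stated for the positive-data case; L is the target language and the
% conditions are required for all stages x < y of the learning process.
\begin{align*}
  \text{strong-monotonic:}      &\quad L(j_x) \subseteq L(j_y)\\
  \text{monotonic:}             &\quad L(j_x) \cap L \subseteq L(j_y) \cap L\\
  \text{weak-monotonic:}        &\quad \mathrm{content}(t_y) \subseteq L(j_x)
                                   \;\Rightarrow\; L(j_x) \subseteq L(j_y)\\[4pt]
  \text{dual strong-monotonic:} &\quad L(j_x) \supseteq L(j_y)\\
  \text{dual monotonic:}        &\quad \overline{L(j_x)} \cap \overline{L}
                                   \subseteq \overline{L(j_y)} \cap \overline{L}\\
  \text{dual weak-monotonic:}   &\quad \mathrm{content}(t_y) \subseteq L(j_x)
                                   \;\Rightarrow\; L(j_x) \supseteq L(j_y)
\end{align*}

Read this way, strong-monotonicity constrains every pair of hypotheses outright, monotonicity and its dual constrain them only on the target language and its complement, respectively, and the weak variants impose the inclusion only as long as the data seen so far remain consistent with the earlier hypothesis.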


© Copyright 1996 Elsevier Science Publishers B.V. (North-Holland).