Monotonic Versus Non-monotonic Language Learning

Authors: Steffen Lange and Thomas Zeugmann

Source: “Nonmonotonic and Inductive Logic, Second International Workshop, Reinhardsbrunn Castle, Germany, December 1991” (G. Brewka, K.P. Jantke and P.H. Schmitt, Eds.), Lecture Notes in Artificial Intelligence 659, pp. 254–269, Springer-Verlag, 1993.

Abstract. In the present paper strong-monotonic, monotonic and weak-monotonic reasoning is studied in the context of algorithmic language learning theory from positive as well as from positive and negative data.

Strong-monotonicity is the requirement that the learner produce only better and better generalizations as more and more data are fed to the inference device. Monotonic learning reflects the possible interplay between generalization and restriction during the process of inferring a language; however, it demands that, of any two hypotheses, the one output later be at least as good as the previously produced one with respect to the language to be learnt. Weak-monotonicity is the analogue of cumulativity in learning theory.
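These requirements can be illustrated informally. In the following sketch (not from the paper), hypotheses are modeled as finite sets of strings standing in for the languages they generate; strong-monotonicity asks that the hypothesis languages form an ascending chain, while monotonicity asks this only of their intersections with the target language:

```python
# Illustrative sketch, assuming hypotheses can be modeled as finite sets.
# Since the subset relation is transitive, checking adjacent pairs suffices.

def strong_monotonic(hyps):
    """True iff each later hypothesis generalizes every earlier one:
    L(h_i) is a subset of L(h_j) for all i < j."""
    return all(hyps[i] <= hyps[i + 1] for i in range(len(hyps) - 1))

def monotonic(hyps, target):
    """True iff monotonicity holds relative to the target language L:
    L(h_i) meet L is a subset of L(h_j) meet L for all i < j."""
    return all((hyps[i] & target) <= (hyps[i + 1] & target)
               for i in range(len(hyps) - 1))

target = {"a", "ab", "abb"}
# The middle hypothesis overgeneralizes with "x", which is later dropped.
hyps = [{"a"}, {"a", "ab", "x"}, {"a", "ab", "abb"}]

print(strong_monotonic(hyps))   # False: dropping "x" violates the chain
print(monotonic(hyps, target))  # True: progress on the target is never lost
```

The example shows why monotonic learning is weaker than strong-monotonic learning: a learner may retract an overgeneralization, as long as it never loses ground on the target language itself.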

We relate all these notions to one another as well as to previously studied modes of identification, thereby, in particular, obtaining a strong hierarchy.

©Copyright 1993 Springer-Verlag