Refined Incremental Learning

Authors: Steffen Lange and Thomas Zeugmann

Source: ``Proc. 8th Australian Joint Conference on Artificial Intelligence - AI'95,'' (Xin Yao, Ed.), pp. 147-154, World Scientific Publ. Co., 1995.

Abstract. The paper provides a systematic study of incremental learning algorithms. The general scenario is as follows. Let c be any concept; then every infinite sequence of elements exhausting c is called a positive presentation of c. An algorithmic learner successively takes as input one element of a positive presentation together with its previously made hypothesis, and outputs a new hypothesis about the target concept. The sequence of hypotheses has to converge to a hypothesis correctly describing the target concept. This scenario is referred to as iterative learning.
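For concreteness, the iterative scenario can be stated formally as follows; the notation below (M for the learner, x_n for the presented elements, h_n for the hypotheses) is chosen for this display and may differ from the paper's own.

    Let $t = x_0, x_1, x_2, \ldots$ be a positive presentation of $c$.
    An iterative learner $M$ computes
    \[
        h_0 = M(x_0), \qquad h_{n+1} = M(x_{n+1}, h_n) \quad (n \geq 0),
    \]
    and $M$ learns $c$ from $t$ iff the sequence $(h_n)_{n \in \mathbb{N}}$
    converges to a hypothesis that correctly describes $c$.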

We refine this scenario by defining and investigating bounded example memory inference and feed-back identification. Bounded example memory learning generalizes iterative inference by allowing the learner to store an a priori bounded number of carefully chosen examples, while feed-back learning allows it to ask whether or not a particular element has already appeared in the data provided so far.
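The difference between the two refinements is easiest to see as a difference in interfaces. The following minimal Python sketch illustrates this; all class and method names (IterativeLearner, remember, already_seen, and so on) are invented for illustration, as the paper itself defines these models in terms of recursive functions, not code.

    from typing import Callable, Generic, Hashable, TypeVar

    H = TypeVar("H")                  # hypothesis space
    X = TypeVar("X", bound=Hashable)  # elements drawn from the target concept

    class IterativeLearner(Generic[X, H]):
        """Sees only the current example and its own previous hypothesis."""
        def update(self, example: X, hypothesis: H) -> H:
            raise NotImplementedError

    class BoundedMemoryLearner(IterativeLearner[X, H]):
        """Additionally stores up to k carefully chosen past examples."""
        def __init__(self, k: int) -> None:
            self.k = k
            self.memory: list[X] = []  # invariant: len(self.memory) <= k

        def remember(self, example: X) -> None:
            # Which k examples to keep is the learner's crucial design
            # decision; this placeholder policy keeps the first k seen.
            if len(self.memory) < self.k:
                self.memory.append(example)

    class FeedbackLearner(IterativeLearner[X, H]):
        """May ask the environment whether a particular element has
        already appeared in the data presented so far."""
        def __init__(self, already_seen: Callable[[X], bool]) -> None:
            # The query oracle is supplied by the environment; the
            # learner itself keeps no record of past examples.
            self.already_seen = already_seen

A plain iterative learner implements update alone; the two subclasses make explicit exactly which extra resource each refinement grants, namely a size-bounded example store versus a membership query over the history.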

We provide a sufficient condition for iterative learning that allows non-enumerative learning. We relate the learning power of our models to one another, and establish an infinite hierarchy of bounded example memory inference. These results nicely contrast with previously made attempts to enlarge the learning capabilities of iterative learners (cf.~Osherson, Stob and Weinstein (1986)). In particular, they provide strong evidence that incremental learning is the art of knowing what to overlook. Finally, feed-back learning is more powerful than iterative inference, and its learning power is incomparable to that of bounded example memory inference. Hence, there is no unique way to design superior incremental learning algorithms.


©Copyright 1995 World Scientific Publ. Co.