Incremental Learning from Positive Data

Authors: Steffen Lange and Thomas Zeugmann

Source: Journal of Computer and System Sciences 53, No. 1, 1996, 88 - 103.

Abstract. The present paper deals with a systematic study of incremental learning algorithms. The general scenario is as follows. Let c be any concept; then every infinite sequence of elements exhausting c is called a positive presentation of c. An algorithmic learner successively takes as input one element of a positive presentation at a time, together with its previously made hypothesis, and outputs a new hypothesis about the target concept. The sequence of hypotheses has to converge to a hypothesis correctly describing the concept to be learned. This basic scenario is referred to as iterative learning.
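To make the protocol concrete, the following minimal sketch illustrates the iterative learning loop; the function names, the `update` rule, and the hypothesis representation are hypothetical placeholders chosen for illustration, not the paper's construction.

```python
def iterative_learn(presentation, update, initial_hypothesis):
    """Iterative learning: at every step the learner receives exactly one
    element of the positive presentation plus its previously made hypothesis."""
    h = initial_hypothesis
    for x in presentation:
        h = update(h, x)   # no other access to earlier elements of the presentation
        yield h            # the emitted hypothesis sequence has to converge to a
                           # correct description of the target concept
```

The essential restriction is visible in the signature of `update`: all information about previously seen data must be encoded in the current hypothesis.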

Iterative inference can be refined by allowing the learner to store an a priori bounded number of carefully chosen examples, resulting in bounded example memory inference (cf. Fulk, Jain and Osherson (1994)). Additionally, feed-back identification is introduced. Here, the learner is moreover allowed to ask whether or not a particular element has already appeared in the data provided so far. A sketch of both refinements follows.
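The two refinements can be pictured analogously. In this hypothetical sketch, `k` bounds the example memory, and the feed-back learner is given query access to the set of previously provided elements; all names and signatures are illustrative assumptions.

```python
def bounded_memory_learn(presentation, update, initial_hypothesis, k):
    """Bounded example memory: besides its hypothesis, the learner keeps
    at most k carefully chosen examples from the data seen so far."""
    h, memory = initial_hypothesis, []
    for x in presentation:
        h, memory = update(h, memory, x)   # learner decides which examples to keep
        assert len(memory) <= k            # memory size is a priori bounded
        yield h

def feedback_learn(presentation, update, initial_hypothesis):
    """Feed-back identification: the learner may ask whether a particular
    element already appeared in the data provided so far."""
    h, seen = initial_hypothesis, set()
    for x in presentation:
        query = lambda y: y in seen        # membership question about past data
        h = update(h, x, query)
        seen.add(x)
        yield h
```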

Our results are threefold. First, the learning capabilities of the various models of incremental learning are related to previously studied learning models. It is proved that incremental learning can always be simulated by inference devices that are both set-driven and conservative. Second, feed-back learning is shown to be more powerful than iterative inference, and its learning power is incomparable to that of bounded example memory inference, which itself also extends that of iterative learning. In particular, the learning power of bounded example memory inference always increases when the number of examples the learner is allowed to store is incremented.

Third, a sufficient condition for iterative inference allowing non-enumerative learning is provided.

The results obtained provide strong evidence that there is no unique way to design superior incremental learning algorithms. Instead, incremental learning is the art of knowing what to overlook.


© Copyright 1996, Academic Press.