Consistency Conditions for Inductive Inference of Recursive Functions

Authors: Yohji Akama and Thomas Zeugmann

Source: New Frontiers in Artificial Intelligence, JSAI 2006 Conference and Workshops, Tokyo, Japan, June 2006, Revised Selected Papers, (Takashi Washio, Ken Satoh, Hideaki Takeda, and Akihiro Inokuchi, Eds.), Lecture Notes in Artificial Intelligence 4384, pp. 251-264, Springer 2007.

Abstract. A consistent learner is required to correctly and completely reflect in its current hypothesis all the data received so far. Although this demand sounds quite plausible, it may render the learning problem unsolvable.

Therefore, in the present paper several variations of consistent learning are introduced and studied. These variations allow a so-called δ-delay, relaxing the consistency demand to all but the last δ data seen.

Additionally, we introduce the notion of coherent learning (again with δ-delay), which requires the learner to correctly reflect only the last datum seen (with δ-delay, only the (n−δ)th datum).
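To make the two demands concrete, the following sketch contrasts them on a finite data segment f(0), ..., f(n). It is purely illustrative and not from the paper: hypotheses are modeled as total Python callables, whereas in the learning-theoretic setting they are programs in a numbering and may diverge; the function names are our own.

    from typing import Callable, Sequence

    def is_consistent_with_delay(hypothesis: Callable[[int], int],
                                 segment: Sequence[int],
                                 delta: int) -> bool:
        # delta-delay consistency: the hypothesis must reproduce every
        # datum f(0), ..., f(n - delta); the last delta data may be ignored.
        n = len(segment) - 1
        return all(hypothesis(x) == segment[x]
                   for x in range(max(n - delta + 1, 0)))

    def is_coherent_with_delay(hypothesis: Callable[[int], int],
                               segment: Sequence[int],
                               delta: int) -> bool:
        # delta-delay coherence: only the (n - delta)th datum must be
        # reproduced; the condition is vacuous while n < delta.
        n = len(segment) - 1
        return n < delta or hypothesis(n - delta) == segment[n - delta]

    # Example: the target agrees with x^2 except on the newest datum f(4).
    segment = [0, 1, 4, 9, 17]
    hypothesis = lambda x: x * x
    assert is_consistent_with_delay(hypothesis, segment, delta=1)      # f(4) ignored
    assert not is_consistent_with_delay(hypothesis, segment, delta=0)  # f(4) checked
    assert is_coherent_with_delay(hypothesis, segment, delta=1)        # only f(3) checked

With δ = 0 the two conditions specialize to the classical demands: consistency on the whole segment, and coherence on the last datum alone.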

Our results are threefold. First, it is shown that all models of coherent learning with δ-delay are exactly as powerful as their corresponding consistent learning models with δ-delay. Second, we provide characterizations of consistent learning with δ-delay in terms of complexity. Finally, we establish strict hierarchies for all consistent learning models with δ-delay in dependence on δ, i.e., allowing a larger delay strictly enlarges the learning power.


©Copyright 2007, Springer