On the Comparison of Inductive Inference Criteria for Uniform Learning of Finite Classes

Author: Sandra Zilles.

Source: Lecture Notes in Artificial Intelligence Vol. 2225, 2001, 251-266.

Abstract. We consider a learning model in which each element of a class of recursive functions is to be identified in the limit by a computable strategy. Given gradually growing initial segments of the graph of a function, the learner is supposed to generate a sequence of hypotheses converging to a correct hypothesis, where correct means that the hypothesis is an index of the target function in a given numbering. Restricting the basic definition of learning in the limit yields several inference criteria, which have already been compared with respect to their learning power.
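The limiting process described above can be illustrated by a minimal sketch in Python. The sketch uses identification by enumeration on a toy finite class of total functions standing in for recursive functions; the names `learner`, `CLASS`, and the particular functions are illustrative assumptions, not part of the paper.

```python
# Hedged sketch: learning in the limit via identification by enumeration.
# The class, the function names, and the hypothesis space (indices into
# CLASS) are toy assumptions for illustration only.

def learner(cls, segment):
    """Output the index of the first function in `cls` that is
    consistent with the observed initial segment of the graph."""
    for i, f in enumerate(cls):
        if all(f(x) == y for x, y in segment):
            return i
    return None  # no consistent hypothesis in this class

# a toy class of three total functions
CLASS = [lambda x: 0, lambda x: x, lambda x: x * x]

target = CLASS[2]  # the function to be learned
hypotheses = []
for n in range(5):
    # gradually growing initial segment of the graph of `target`
    segment = [(x, target(x)) for x in range(n + 1)]
    hypotheses.append(learner(CLASS, segment))

print(hypotheses)  # the sequence converges to the correct index 2
```

After finitely many mind changes the output sequence stabilizes on a correct index, which is exactly the convergence behavior required by learning in the limit.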

The aim of uniform learning is to synthesize appropriate identification strategies for infinitely many classes of recursive functions by a uniform method; that is, a kind of meta-learning is considered. Within this framework we can again compare the learning power of the various inference criteria. If we fix a single numbering to serve as hypothesis space for all classes of recursive functions, we obtain results similar to the non-uniform case. This hierarchy of inference criteria changes if we admit different hypothesis spaces for different classes of functions. Interestingly, in uniform identification most of the inference criteria can already be separated by collections of finite classes of recursive functions.
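The synthesis idea can be sketched as a higher-order procedure: one fixed computable method that receives a description of a class and returns a strategy for that class. In this toy Python sketch, a class description is simply a finite list of functions, and the synthesized strategy again uses identification by enumeration; all names (`synthesize`, `classes`) are hypothetical.

```python
# Hedged sketch of uniform (meta-)learning: a single procedure maps a
# description of a class to a learning strategy for that class.
# Representing a class description as a finite list of functions is a
# simplifying assumption for illustration.

def synthesize(described_class):
    """Uniform method: given a class description, return a strategy."""
    def strategy(segment):
        # identification by enumeration within the described class
        for i, f in enumerate(described_class):
            if all(f(x) == y for x, y in segment):
                return i
        return None
    return strategy

# two toy classes, each with its own hypothesis space (its own indexing)
classes = [
    [lambda x: 1, lambda x: x + 1],
    [lambda x: 0, lambda x: 2 * x],
]

learners = [synthesize(c) for c in classes]

target = classes[1][1]  # learn x -> 2x from the second class
segment = [(x, target(x)) for x in range(3)]
hyp = learners[1](segment)
print(hyp)  # index of the target within its own class
```

The point of the sketch is that one uniform procedure serves all the classes; which hypothesis space each synthesized strategy uses (shared or per-class) is precisely the distinction the abstract draws between the two uniform settings.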

©Copyright 2001 Springer