Authors: Lorenzo Carlucci, John Case, Sanjay Jain, and Frank Stephan
Source: Algorithmic Learning Theory, 16th International Conference, ALT 2005, Singapore, October 2005, Proceedings, (Sanjay Jain, Hans Ulrich Simon and Etsuji Tomita, Eds.), Lecture Notes in Artificial Intelligence 3734, pp. 241–255, Springer 2005.
Abstract. U-shaped learning behaviour in cognitive development involves learning, unlearning and relearning. It occurs, for example, in learning irregular verbs. The prior cognitive science literature is concerned with how humans do it, for example, via general rules versus tables of exceptions. This paper is mostly concerned with whether U-shaped learning behaviour may be necessary in the abstract mathematical setting of inductive inference, that is, in computational learning theory following the framework of Gold. All notions considered are learning from text, that is, from positive data. Previous work showed that U-shaped learning behaviour is necessary for behaviourally correct learning but not for syntactically convergent learning in the limit (= explanatory learning). The present paper establishes the necessity of U-shaped behaviour for the whole hierarchy of classes of vacillatory learning, in which a behaviourally correct learner must additionally vacillate in the limit between at most k grammars, for some k ≥ 1. Non U-shaped vacillatory learning is shown to be restrictive: every non U-shaped vacillatorily learnable class is already learnable in the limit. Furthermore, if vacillatory learning with parameter k = 2 is possible, then non U-shaped behaviourally correct learning is also possible. But for k = 3, surprisingly, there is a class witnessing that this implication fails.
©Copyright 2005, Springer