A Guided Tour Across the Boundaries of Learning Recursive Languages

Authors: Steffen Lange and Thomas Zeugmann

Source: “Algorithmic Learning for Knowledge-Based Systems” (K.P. Jantke and S. Lange, Eds.), Lecture Notes in Artificial Intelligence 961, pp. 193–262, Springer-Verlag, 1995.


The present paper deals with the learnability of indexed families of uniformly recursive languages from positive data as well as from both positive and negative data. We consider the influence of various monotonicity constraints on the learning process, and provide a thorough study of the influence of several parameters. In particular, we present examples pointing to typical problems and solutions in the field. Then we provide a unifying framework for learning. Furthermore, we survey results concerning learnability in dependence on the hypothesis space, and concerning order independence. Moreover, new results dealing with the efficiency of learning are provided. First, we investigate the power of iterative learning algorithms. The second measure of efficiency studied is the number of mind changes a learning algorithm is allowed to perform. In this setting we consider the problem of whether or not the monotonicity constraints introduced influence the efficiency of learning algorithms.

The paper mainly aims to provide a comprehensive summary of recently obtained results and of the proof techniques developed. Finally, throughout our guided tour we discuss the question of what a natural language learning algorithm might look like.

©Copyright 1995, Springer-Verlag.