Authors: Steffen Lange and Thomas Zeugmann
Source: ``Algorithmic Learning Theory, 4th International Workshop on Analogical and Inductive Inference, AII '94, 5th International Workshop on Algorithmic Learning Theory, ALT '94, Reinhardsbrunn Castle, Germany, October 1994, Proceedings,'' (S. Arikawa and K.P. Jantke, Eds.), Lecture Notes in Artificial Intelligence 872, pp. 453 - 468, Springer-Verlag 1994
Abstract. The present paper deals with the learnability of indexed families of uniformly recursive languages from positive data under various postulates of naturalness. In particular, we consider set-driven and rearrangement-independent learners, i.e., learning devices whose output exclusively depends on the range of their input, and on the range and length of their input, respectively. The impact of set-drivenness and rearrangement-independence on the learning power of such devices is studied in dependence on the hypothesis space the learners may use. Furthermore, we consider the influence of set-drivenness and rearrangement-independence on learning devices that realize the subset principle to different extents. Thereby we distinguish between strong-monotonic, monotonic, and weak-monotonic or conservative learning.
The results obtained are twofold. First, rearrangement-independent learning does not constitute a restriction except in the case of monotonic learning. Second, we prove that for all but one of the considered learning models set-drivenness is a severe restriction. However, set-driven conservative learning is exactly as powerful as unrestricted conservative learning, provided the hypothesis space is appropriately chosen. These results considerably extend previous work done in the field (cf., e.g., Schäfer-Richter (1984) and Fulk (1990)).
©Copyright 1994 Springer-Verlag