Can Learning in the Limit Be Done Efficiently?

Author: Thomas Zeugmann

Source: Algorithmic Learning Theory, 14th International Conference, ALT 2003, Sapporo, Japan, October 17-19, 2003, Proceedings (Ricard Gavaldà, Klaus P. Jantke and Eiji Takimoto, Eds.), Lecture Notes in Artificial Intelligence 2842, pp. 17-38, Springer 2003.

Abstract. Inductive inference can be considered one of the fundamental paradigms of algorithmic learning theory. We survey recently obtained results and show their impact on potential applications.

Since the main focus is on the efficiency of learning, we also deal with postulates of naturalness and their impact on the efficiency of limit learners. In particular, we look at the learnability of the class of all pattern languages and ask whether or not one can design a learner within the paradigm of learning in the limit that is nevertheless efficient.
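To make the object of study concrete: a pattern is a finite string of constant symbols and variables, and its (non-erasing) language consists of all words obtained by substituting a non-empty constant string for each variable, with repeated occurrences of a variable receiving the same string. The following sketch is illustrative only; the token encoding, the variable-naming convention, and the brute-force backtracking are choices made here, not constructions from the paper.

# Membership test for a non-erasing pattern language in the sense of Angluin.
# A pattern is a list of tokens; tokens starting with 'x' are variables,
# all other tokens are constant strings. Membership for pattern languages
# is NP-complete in general, so brute-force search is adequate only for
# exposition, not for efficient learning.
def matches(pattern, word, binding=None):
    if binding is None:
        binding = {}
    if not pattern:
        return word == ""
    head, rest = pattern[0], pattern[1:]
    if not head.startswith("x"):              # constant: must be a prefix
        return word.startswith(head) and matches(rest, word[len(head):], binding)
    if head in binding:                       # bound variable: reuse its value
        val = binding[head]
        return word.startswith(val) and matches(rest, word[len(val):], binding)
    for i in range(1, len(word) + 1):         # free variable: try every
        binding[head] = word[:i]              # non-empty prefix of word
        if matches(rest, word[i:], binding):
            return True
        del binding[head]
    return False

# The pattern x1 a b x1 generates, e.g., "0ab0" and "10ab10".
print(matches(["x1", "a", "b", "x1"], "10ab10"))  # True
print(matches(["x1", "a", "b", "x1"], "ab"))      # False (substitutions are non-empty)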

To achieve this goal, we deal with iterative learning and its interplay with the allowed hypothesis spaces. This interplay also has a severe impact on the postulates of naturalness satisfiable by any learner.
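The defining restriction of iterative learning is that the learner's only memory across examples is its current hypothesis: the next hypothesis is computed from the previous hypothesis and the current example alone. Below is a minimal sketch on a toy class; the class and all names are chosen here for illustration and are not taken from the paper.

# Iterative learning in the limit from positive data for the toy class
# L_n = {0, 1, ..., n}, where the hypothesis n encodes L_n. The update
# rule sees only the old hypothesis and the current example.
def iterative_update(hypothesis, example):
    if hypothesis is None or example > hypothesis:
        return example
    return hypothesis

# On any positive presentation (text) of L_5 the learner converges:
# once the maximum 5 has appeared, the hypothesis never changes again.
h = None
for x in [2, 0, 5, 3, 5, 1]:
    h = iterative_update(h, x)
print(h)  # 5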

Finally, since a limit learner is only supposed to converge in the limit, one never knows at any particular learning stage whether or not the learner has already succeeded. The resulting uncertainty may be prohibitive in many applications. We survey results that resolve this problem by outlining a new learning model, called stochastic finite learning. Though pattern languages can neither be finitely inferred from positive data nor PAC-learned, our approach can be extended to a stochastic finite learner that exactly infers all pattern languages from positive data with high confidence.
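Schematically, a stochastic finite learner wraps a limit learner: assuming examples are drawn i.i.d. and a bound on the expected stage of convergence is known, it requests enough random examples that the wrapped learner has converged with probability at least 1 - delta, then outputs a single hypothesis and stops. The sketch below is a simplest-possible instance built on these assumptions; the Markov-inequality sample size and all names are choices made here, and sharper tail bounds yield far better sample sizes.

import math
import random

def stochastic_finite_learn(draw_example, update, expected_stage, delta):
    # If the stage T at which the wrapped limit learner converges satisfies
    # E[T] <= expected_stage, then by Markov's inequality running for
    # n = ceil(expected_stage / delta) examples leaves non-convergence
    # probability at most delta.
    n = math.ceil(expected_stage / delta)
    h = None
    for _ in range(n):
        h = update(h, draw_example())
    return h  # one final hypothesis; no further mind changes

# Toy usage: the max-learner for L_5 = {0, ..., 5} under the uniform
# distribution; the first appearance of the maximum 5 is geometrically
# distributed with mean 6, so expected_stage = 6 is a valid bound.
random.seed(1)
update = lambda h, x: x if h is None or x > h else h
print(stochastic_finite_learn(lambda: random.randint(0, 5), update, 6, 0.05))  # 5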



©Copyright 2003, Springer