Author: Thomas Zeugmann
Source:
Theoretical Computer Science, Vol. 364, Issue 1, 2006, 77-97.
(Special Issue: Algorithmic Learning Theory (ALT 2003)).

Abstract. Inductive inference can be considered one of the fundamental paradigms of algorithmic learning theory. We survey recently obtained results and show their impact on potential applications. Since the main focus is on the efficiency of learning, we also deal with postulates of naturalness and their impact on the efficiency of limit learners. In particular, we look at the learnability of the class of all pattern languages and ask whether or not one can design a learner within the paradigm of learning in the limit that is nevertheless efficient. To achieve this goal, we deal with iterative learning and its interplay with the hypothesis spaces allowed. This interplay also has a severe impact on the postulates of naturalness that any learner can satisfy. Furthermore, since a limit learner is only supposed to converge, one never knows at any particular learning stage whether or not the learner has already succeeded. The resulting uncertainty may be prohibitive in many applications. We survey results that resolve this problem by outlining a new learning model, called stochastic finite learning. Though pattern languages can neither be finitely inferred from positive data nor PAC-learned, our approach can be extended to a stochastic finite learner that exactly infers all pattern languages from positive data with high confidence. Finally, we apply the techniques developed to the problem of learning conjunctive concepts.
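For readers unfamiliar with the central object of study, the following minimal Python sketch (not from the paper; the 'x'-prefix token convention and the helper name `matches` are our own) illustrates Angluin-style non-erasing pattern languages: a pattern is a string of constants and variables, and its language consists of all words obtained by substituting a non-empty string for each variable, with repeated variables receiving the same substitution.

from typing import Dict, List

def matches(pattern: List[str], word: str) -> bool:
    """Decide membership of `word` in the language of `pattern`.

    Tokens starting with 'x' are variables; all other tokens are
    constant strings. Every variable must be replaced by a non-empty
    string, and repeated occurrences of a variable receive the same
    substitution (non-erasing pattern languages). Membership is
    decided by backtracking over variable substitutions.
    """
    def solve(i: int, pos: int, binding: Dict[str, str]) -> bool:
        if i == len(pattern):
            return pos == len(word)
        tok = pattern[i]
        if not tok.startswith('x'):                  # constant token
            return word.startswith(tok, pos) and \
                   solve(i + 1, pos + len(tok), binding)
        if tok in binding:                           # bound variable
            val = binding[tok]
            return word.startswith(val, pos) and \
                   solve(i + 1, pos + len(val), binding)
        for end in range(pos + 1, len(word) + 1):    # unbound: guess a
            binding[tok] = word[pos:end]             # non-empty value
            if solve(i + 1, end, binding):
                return True
            del binding[tok]
        return False

    return solve(0, 0, {})

# The pattern x1 0 x1 generates exactly the words w0w with w non-empty:
assert matches(['x1', '0', 'x1'], '101')        # x1 -> '1'
assert matches(['x1', '0', 'x1'], '11011')      # x1 -> '11'
assert not matches(['x1', '0', 'x1'], '100')

The backtracking above is exponential in the worst case; indeed, Angluin (1980) showed that the membership problem for arbitrary patterns is NP-complete, which is one reason why the efficiency of pattern-language learners discussed in the survey is a delicate issue.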
©Copyright 2006, Elsevier Science B.V.