Can Learning in the Limit be Done Efficiently?
(invited lecture for ALT 2003)
Author: Thomas Zeugmann
Affiliation: Institute for Theoretical Computer Science,
University at Lübeck, Lübeck,
Germany
Abstract.
Inductive inference can be considered one of the fundamental
paradigms of algorithmic learning theory. We survey recently
obtained results and show their impact on potential applications.
\par
Since the main focus is on the efficiency of learning, we also
deal with postulates of naturalness and their impact on the efficiency
of limit learners.
In particular, we look at the learnability of
the class of all pattern languages and ask whether or not one
can design a learner within the paradigm of learning in the limit
that is nevertheless efficient.
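\par
Recall that a pattern is a finite non-empty string over constants and
variables, and that its (non-erasing) language is obtained by substituting
non-empty constant strings for the variables, all occurrences of the same
variable receiving the same substitution. A small illustrative instance
(the concrete pattern is our choice, not one taken from the survey) over
the alphabet $\{0,1\}$ is
\[
  \pi = 0x1x, \qquad L(\pi) = \{\, 0w1w \mid w \in \{0,1\}^{+} \,\};
\]
for example, $0010 \in L(\pi)$ via the substitution $w = 0$.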
\par
To achieve this goal, we deal with iterative learning and its
interplay with the admissible hypothesis spaces. This interplay also
has a strong impact on which postulates of naturalness a learner
can satisfy.
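\par
To make the notion of iterative learning concrete: an iterative learner must
compute its next hypothesis from its current hypothesis and the newest example
only. The following minimal sketch is loosely modelled on Lange and Wiehagen's
iterative pattern learner; the merge rule is our simplified reconstruction for
illustration, not the algorithm analyzed in the survey.
\begin{verbatim}
# A minimal sketch of an iterative limit learner for non-erasing pattern
# languages. The next hypothesis is computed from the current hypothesis
# and the newest example alone -- no earlier examples are stored.

def merge(p, s):
    """Position-wise union of a pattern p and an equal-length string s.
    Positions that disagree in the same way share the same variable."""
    var_of = {}  # maps a disagreeing symbol pair to its variable
    out = []
    for a, b in zip(p, s):
        if a == b:
            out.append(a)                # agreement: keep the symbol
        else:
            var_of.setdefault((a, b), "x%d" % len(var_of))
            out.append(var_of[(a, b)])   # disagreement: use a variable
    return tuple(out)

def update(hypothesis, example):
    """One iterative step: new hypothesis from old hypothesis + one example."""
    s = tuple(example)
    if hypothesis is None or len(s) < len(hypothesis):
        return s                         # shorter example: restart from it
    if len(s) == len(hypothesis):
        return merge(hypothesis, s)      # equal length: generalize by merging
    return hypothesis                    # longer example: ignore it

# Feeding the two shortest strings of L(0x1x) recovers the target pattern:
h = None
for w in ["0010", "0111"]:
    h = update(h, w)
print(h)  # ('0', 'x0', '1', 'x0'), i.e., the pattern 0x1x
\end{verbatim}
Note that the learner stores nothing but its current pattern; in particular,
it cannot re-inspect earlier examples, which is exactly the memory limitation
that iterative learning imposes.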
\par
Finally, since a limit learner is only required to converge in the
limit, one never knows at any particular learning stage whether or not
the learner has already succeeded. The resulting uncertainty may be
prohibitive in many applications. We survey results addressing this
problem and outline a new learning model, called stochastic finite learning.
Though pattern languages can neither be finitely inferred
from positive data nor PAC-learned, our approach yields a
stochastic finite learner that exactly infers all pattern languages
from positive data with high confidence.
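\par
To illustrate the model itself: a stochastic finite learner draws a number of
random examples that is computed in advance from prior knowledge about the
example distribution and a confidence parameter, and then halts with a single
hypothesis. The sketch below reuses the update function from the previous
sketch; the toy distribution and the sample size of 50 are placeholders, not
the bounds derived in the survey.
\begin{verbatim}
import random

# Assumes `update` from the iterative-learner sketch above.
def stochastic_finite_learn(sample_example, num_examples):
    """Read a fixed number of random examples, then halt with one guess."""
    h = None
    for _ in range(num_examples):
        h = update(h, sample_example())
    return h

# Toy distribution over L(0x1x): draw w uniformly among short binary strings.
def sample_example():
    w = "".join(random.choice("01") for _ in range(random.randint(1, 3)))
    return "0" + w + "1" + w

# With high probability both shortest strings of L(0x1x) occur among 50
# draws, so the final hypothesis is the target pattern 0x1x.
print(stochastic_finite_learn(sample_example, num_examples=50))
\end{verbatim}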
© Copyright 2003 Springer