**Author:** Peter Rossmanith

**Source:** *Lecture Notes in Artificial Intelligence*, Vol. 1720, 1999, pp. 132–144.

**Abstract.**
Learning in the limit deals mainly with the question of *what* can
be learned, but not very often with the question of *how fast*.
The purpose of this paper is to develop a learning model that stays
very close to Gold's model, but enables questions on the
speed of convergence to be answered.
In order to do this, we have to assume that
positive examples are generated by some stochastic model. If the
stochastic model is fixed (*measure-one learning*), then all recursively
enumerable sets become identifiable, but the resulting model strays
far from Gold's. In contrast, we define *learning from random text* as
identifying a class of languages for *every* stochastic model where
examples are generated independently and identically distributed.
As it turns out, this model stays close to learning in the limit.
We compare both models in several respects, in particular under
restriction to various learning strategies and with regard to the
existence of locking sequences. Lastly, we present some results on the speed of
convergence: In general, convergence can be arbitrarily slow, but for
recursive learners, it cannot be slower than some magic function.
Every language can be learned with *exponentially small tail
bounds*, which are also *the best possible*. All results apply
fully to Gold-style learners, since Gold's model is a proper subset of
learning from random text.
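To make the setting concrete, here is a small, hypothetical sketch (not taken from the paper) of learning from an i.i.d. random text, for the toy class of languages L_n = {0, 1, ..., n}. The names `learner` and `random_text` are illustrative assumptions, not notation from the paper:

```python
import random

# Hypothetical sketch: a learner for the class L_n = {0, 1, ..., n}.
# It conjectures L_m, where m is the largest example seen so far.  On any
# i.i.d. text whose support is exactly L_n, it almost surely sees n
# eventually, after which its conjecture never changes again -- this is
# identification in the limit from a random text.

def learner(examples):
    """Conjecture the index m of L_m = {0, ..., m} from positive examples."""
    return max(examples)

def random_text(n, length, rng):
    """Draw `length` i.i.d. positive examples uniformly from L_n."""
    return [rng.randrange(n + 1) for _ in range(length)]

rng = random.Random(0)            # fixed seed for reproducibility
n = 7
text = random_text(n, 200, rng)
# The learner's conjecture after each prefix of the text; the sequence
# is non-decreasing and converges to n almost surely as the text grows.
conjectures = [learner(text[:t + 1]) for t in range(len(text))]
print("final conjecture:", conjectures[-1])
```

The probability that the conjecture has not yet stabilized after t examples decays geometrically in t here, which illustrates the kind of tail bound on convergence speed that the abstract refers to.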

© 1999 Springer-Verlag