
## Learning in the Limit
Learning in the limit is a very general paradigm. To define it, we call any recursively enumerable subset c of X a **concept**, and any collection C of concepts a **concept class**; i.e., a concept class C is a subset of 2^X.
The information given to the learner consists of successively growing initial
segments of a data sequence d. We write d[j] to denote the initial segment of d of length
j + 1.
Next, we specify the admissible data sequences. We distinguish learning
from **text**, or synonymously from positive data, and learning from **informant**, or
synonymously from positive and negative data. In the first case,
the elements of a data sequence are any elements of X that belong
to the target concept c. Usually, a text is additionally required
to exhaust the target concept, i.e., range(d) = c.
That is, a positive presentation must contain all the elements belonging
to the target at least once, and none of the elements that do not
belong to c. We use text(c) to denote
the set of all texts for c.
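To make this concrete, a text for a concept can be generated by filtering an enumeration of the learning domain. The following is a minimal Python sketch under illustrative assumptions: X is taken to be the natural numbers and c the even numbers (both choices, and the function names, are ours, not from the text above).

```python
from itertools import count, islice

def canonical_text(in_concept, domain):
    """Yield a text for a concept c: every element of c eventually
    appears (here, in increasing order), and no element outside c
    ever does."""
    for x in domain:
        if in_concept(x):
            yield x

# Illustration: X = natural numbers, c = even numbers.
is_even = lambda x: x % 2 == 0
d = list(islice(canonical_text(is_even, count()), 5))
print(d)  # first five elements of this text: [0, 2, 4, 6, 8]
```

Of course, a text need not list c in increasing order, and elements may repeat; this generator just produces one particular text for c.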
In case of informants, each element of the data sequence is an example, i.e., a labeled
element d_j = (x_j, b_j), where b_j = 0 if x_j does not belong to
c, and b_j = 1 otherwise.
An informant is usually required to contain every element of
the learning domain X at least once. That is,
the set of all examples labeled 1 exhausts the target c,
and the set of all examples labeled 0 is exactly X \ c.
By info(c) we denote the set of all informants
for c.
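An informant can be sketched analogously: enumerate the learning domain and attach the membership label to each element. Again the domain, concept, and names are illustrative assumptions.

```python
from itertools import count, islice

def canonical_informant(in_concept, domain):
    """Yield an informant for c: every x in the learning domain X
    appears at least once, labeled (x, 1) if x belongs to c and
    (x, 0) otherwise."""
    for x in domain:
        yield (x, 1 if in_concept(x) else 0)

# Illustration: X = natural numbers, c = even numbers.
is_even = lambda x: x % 2 == 0
segment = list(islice(canonical_informant(is_even, count()), 4))
print(segment)  # [(0, 1), (1, 0), (2, 1), (3, 0)]
```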
Note, however, that we are going to relax the requirement that a text and an informant exhaust the target concept and the learning domain, respectively, by demanding only that both contain ``enough information'' to learn. What is meant by enough will be explained later.
A limit learner is an inductive inference machine (IIM) M that works as follows.
As input, it receives incrementally growing initial segments of a positive
presentation (resp. of an informant) d.
After each new input,
it outputs a hypothesis M(d[j]) from a predefined
hypothesis space H.
Each hypothesis refers to a unique element of the target concept class.
Note that the hypothesis space and the concept class can coincide.
A concept class C is said to be **learnable in the limit from text** (resp. **from
informant**) if there is an IIM M such that
for every c in C and every d in text(c)
(resp. d in info(c)):

1. M(d[j]) is defined for every j, and
2. the sequence (M(d[j])) converges to a hypothesis h in H that correctly describes c, i.e., M(d[j]) = h for all but finitely many j.
By LIM we denote the collection of all concept classes that are learnable in the limit. It can be shown that many interesting concept classes are learnable in the limit from informant. As far as learning from text is concerned, the situation is more subtle. But before developing these topics further, it should be noted that this model has some major drawbacks:
- In the definition given above, the limit learner has always access
to the whole actual initial segment of the data sequence provided.
Clearly, in practice such a learner would run out of space pretty soon.
- Since a limit learner is only supposed to converge,
one never knows at any particular learning
stage whether or not it has already been successful.
Such an **uncertainty** may not be **acceptable** in many applications.
- If one modifies the definition of learning in the limit by adding the requirement that convergence should be decidable, one arrives at **finite learning**. However, the power of finite learning is much smaller than that of learning in the limit.
- It seems difficult to define an appropriate measure for the complexity of learning.
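To make the contrast with finite learning concrete, here is a hedged Python sketch of a finite learner for a two-concept toy class of our own choosing, c0 = {0} and c1 = {0, 1}, from informant: the learner withholds its answer until the label of the distinguishing element 1 has been seen, then commits once and for all, so convergence is decidable.

```python
def finite_learner(segment):
    """A finite learner for the toy class {c0, c1} with c0 = {0}
    and c1 = {0, 1}, learning from informant.  It outputs '?'
    (no commitment) until the label of the distinguishing element 1
    appears, then commits and never changes its mind again."""
    for x, b in segment:
        if x == 1:
            return "c1" if b else "c0"
    return "?"

# Growing initial segments of an informant for c1 = {0, 1}.
informant = [(0, 1), (2, 0), (1, 1), (3, 0)]
guesses = [finite_learner(informant[:j + 1]) for j in range(len(informant))]
print(guesses)  # ['?', '?', 'c1', 'c1'] -- the first non-'?' output is final
```

The price of this decidability is power: a finite learner must be able to recognize a finite ``tell-tale'' that settles the target, which few interesting classes admit.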