**Author:** Léa Meyer

Email: lea@modell.iig.uni-freiburg.de

**Source:** *Theoretical Computer Science*, Vol. 185, No. 1, 1997, pp. 81-128.

**Abstract.**
The present paper deals with probabilistic identification of indexed
families of uniformly recursive languages from positive data under
monotonicity constraints. We consider conservative, strong-monotonic,
and monotonic probabilistic learning of indexed families with respect
to class-comprising, class-preserving, and proper hypothesis spaces,
and investigate the probabilistic hierarchies in these learning models.

Earlier results in the field of probabilistic identification established
that, in the setting of function identification, every collection of
recursive functions identifiable with probability *p* > 1/2 is
deterministically identifiable (cf. [50]). In the case of language
learning from text, every collection of recursive languages identifiable
from text with probability *p* > 2/3 is deterministically identifiable
(cf. [40]). For the learning models mentioned above, however, we obtain
highly structured probabilistic hierarchies without a "gap" between the
probabilistic and deterministic learning classes. For proper conservative
probabilistic learning and proper strong-monotonic probabilistic
learning, we show that the probabilistic hierarchies are dense in
[0, 1]. For proper monotonic probabilistic learning, we show that the
probabilistic hierarchy is dense in (2/3, 1]. For class-preserving
conservative or monotonic probabilistic learning, we show that the
corresponding probabilistic hierarchies are strictly decreasing with
increasing probability. These results considerably extend previous work
(cf. [50, 51]). For class-preserving strong-monotonic probabilistic
learning, we show that every indexed family of uniformly recursive
languages that is strong-monotonically identifiable with probability
*p* > 2/3 with respect to a class-preserving hypothesis space is
deterministically strong-monotonically identifiable with respect to a
class-preserving hypothesis space; i.e., the corresponding probabilistic
hierarchy has a "gap" beginning at *p* = 2/3. In the class-comprising
case, each of the investigated probabilistic hierarchies has a "gap". In
particular, we show that for class-comprising conservative learning, as
well as for learning without additional constraints, probabilistic
identification and team identification are equivalent. This yields
discrete probabilistic hierarchies in these cases.

© Copyright 1997 Elsevier Science