Authors: Rüdiger Reischuk* and Thomas Zeugmann**
Source: Proc. 16th Annual Symposium on Theoretical Aspects of Computer Science - STACS'99, March 4-6, Trier, Germany, Lecture Notes in Computer Science 1563, pp. 414-423, Springer-Verlag 1999.
Abstract. We advocate the average-case analysis of learning problems and introduce an appropriate framework for this purpose. Within this framework we consider the problem of learning monomials, and the special case of learning monotone monomials, in the limit and for on-line prediction in two variants: from positive data only, and from both positive and negative examples. The well-known Wholist algorithm is completely analyzed, in particular its average-case behavior with respect to the class of binomial distributions. We consider different complexity measures: the number of mind changes, the number of prediction errors, and the total learning time. Tight bounds are obtained, implying that worst-case bounds are too pessimistic; on average, learning can be achieved exponentially faster.
Furthermore, we study a new learning model, stochastic finite learning, in which, in contrast to PAC learning, some information about the underlying distribution is given and the goal is to find a correct (not merely approximately correct) hypothesis. We develop techniques for deriving good bounds for stochastic finite learning from a precise average-case analysis of strategies for learning in the limit, and we illustrate our approach for the case of learning monomials.
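For concreteness, the following is a minimal sketch of the Wholist strategy in its standard form for learning monomials from positive examples: the hypothesis starts as the conjunction of all 2n literals and drops every literal falsified by a positive example. The Python encoding, function name, and bookkeeping of mind changes are our own illustration, not taken from the paper.

```python
def wholist(positive_examples, n):
    """Sketch of the Wholist strategy for monomials over n Boolean variables.

    The hypothesis is a set of literals: (i, True) stands for x_i,
    (i, False) for its negation. Initially all 2n literals are present;
    each positive example deletes exactly the literals it falsifies.
    """
    hypothesis = {(i, v) for i in range(n) for v in (True, False)}
    mind_changes = 0
    for example in positive_examples:            # example: tuple of n bits
        # A literal is falsified when its sign disagrees with the bit.
        falsified = {(i, not bool(b)) for i, b in enumerate(example)}
        if hypothesis & falsified:
            hypothesis -= falsified              # drop falsified literals
            mind_changes += 1                    # the hypothesis changed
    return hypothesis, mind_changes
```

For instance, wholist([(1, 0, 1), (1, 1, 1)], 3) returns the literal set {(0, True), (2, True)}, i.e. the monomial x1 AND x3, after two mind changes; the number of mind changes is one of the complexity measures analyzed in the paper.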
* Part of this work was performed while visiting the Department of Informatics at Kyushu University, supported by the Japan Society for the Promotion of Science under Grant JSPS 29716102.
** Supported by the Grant-in-Aid for Scientific Research in Fundamental Areas from the Japanese Ministry of Education, Science, Sports, and Culture under grant no. 10558047.