Source: Proc. 13th Annual Conference on Computational Learning Theory, June 28th - July 1st, Stanford University, Palo Alto, (N. Cesa-Bianchi and S. Goldman, Eds.) pp. 59 - 68, Morgan Kaufmann Publ. 2000.
Abstract. The present paper deals with the average-case complexity of various algorithms for learning univariate polynomials. For this purpose, an appropriate framework is introduced. Based on it, the learnability of univariate polynomials evaluated over the natural numbers, and of univariate polynomials defined over finite fields, is analyzed.
Our results are manifold. In the first case, convergence is measured not relative to the degree of a polynomial alone but with respect to a measure that takes both the degree and the size of the coefficients into account. Standard interpolation is then proved not to be the best possible algorithm with respect to the average number of examples needed.
In general, polynomials over finite fields are not uniquely specified by their input-output behavior. Thus, as a new form of data representation, the remainders of a polynomial modulo other polynomials are proposed, and the expected example complexity is analyzed for a rather rich class of probability distributions.
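The non-uniqueness claim can be checked directly: over GF(p), Fermat's little theorem gives x^p ≡ x (mod p) for every field element, so the distinct polynomials x^p and x induce the same function on all of GF(p). A minimal sketch (the choice p = 5 is an illustrative assumption, not taken from the paper):

```python
# Over GF(p), two distinct polynomials can agree on every input,
# so input-output behavior alone cannot identify the target.
p = 5  # a small prime, chosen for illustration

f = lambda x: pow(x, p, p)  # f(x) = x^p  reduced mod p
g = lambda x: x % p         # g(x) = x

# By Fermat's little theorem, f and g coincide on all of GF(p) ...
assert all(f(x) == g(x) for x in range(p))

# ... yet as formal polynomials they differ (degrees p and 1),
# which is why a richer data representation, such as remainders
# modulo other polynomials, is needed to distinguish them.
```

This is exactly the obstruction that motivates replacing value queries by remainder information in the finite-field setting.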
*This author was supported by the Deutsche Forschungsgemeinschaft (DFG) under Research Grant no. Ste 967/1-1.
©Copyright 2000, Morgan Kaufmann