PAC learning with nasty noise
Authors: Nader H. Bshouty *,
Nadav Eiron** and
Eyal Kushilevitz
Source: Theoretical Computer Science, Vol. 288, Issue 2, 17 September 2002, pp. 255-275.
Abstract.
We introduce a new model for learning in the presence of noise, which we call
the Nasty Noise model. This model generalizes previously considered
models of learning with noise. The learning process in this model, which is a
variant of the PAC model, proceeds as follows: Suppose that the learning
algorithm during its execution asks for m examples. The examples that
the algorithm gets are generated by a nasty adversary that works according to
the following steps. First, the adversary chooses m examples
(independently) according to a fixed (but unknown to the learning algorithm)
distribution D, as in the PAC model. Then the powerful adversary, upon
seeing the specific m examples that were chosen (and using its knowledge
of the target function, the distribution D, and the learning algorithm),
is allowed to remove a fraction of the examples of its choice and to replace
them with the same number of arbitrary examples of its choice;
the m modified examples are then given to the learning algorithm.
The only restriction on the adversary is that the number of examples that
the adversary is allowed to modify should be distributed according to a
binomial distribution with parameters m and η (the noise rate).
On the negative side, we prove that no algorithm can achieve accuracy
of ε < 2η when learning any non-trivial class of functions.
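The example-generation process described in the abstract can be sketched as a simulation. All function and parameter names below (D_sample, target, adversary, eta) are illustrative choices, not identifiers from the paper; the adversary here is an arbitrary caller-supplied function, standing in for the computationally unbounded adversary of the model:

```python
import random

def nasty_sample(m, eta, D_sample, target, adversary, rng=random):
    """Simulate one call to a nasty-noise example oracle (illustrative sketch).

    D_sample() draws one instance from the fixed but unknown distribution D;
    target(x) is the hidden concept's label; adversary(batch, E) may replace
    up to E of the m labeled examples with arbitrary ones of its choice.
    """
    # Step 1: draw m labeled examples independently according to D.
    clean = [(x, target(x)) for x in (D_sample() for _ in range(m))]
    # Step 2: the number of modifiable examples is Binomial(m, eta):
    # each example is independently corruptible with probability eta.
    E = sum(rng.random() < eta for _ in range(m))
    # Step 3: the adversary sees the whole batch (and, in the model, the
    # target, D, and the learner) and returns the modified m examples.
    return adversary(clean, E)
```

Note that the adversary acts after seeing the realized sample, which is what distinguishes this model from weaker noise models where corruption is decided example by example.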
* This research was supported by the fund for the promotion of research at the Technion. Part of this research was done at the University of Calgary, Calgary, Alberta, Canada.
** Some of this research was done while this author was a graduate student in the Department of Computer Science, Technion, Haifa, Israel.
©Copyright 2002 Elsevier Science B.V.