Active Learning in the Non-realizable Case

Author: Matti Kääriäinen

Source: Algorithmic Learning Theory, 17th International Conference, ALT 2006, Barcelona, October 2006, Proceedings (José L. Balcázar, Phil Long and Frank Stephan, Eds.), Lecture Notes in Artificial Intelligence 4264, pp. 63–77, Springer 2006.

Abstract. Most existing active learning algorithms are based on the realizability assumption: the learner's hypothesis class is assumed to contain a target function that perfectly classifies all training and test examples. This assumption can hardly ever be justified in practice. In this paper, we study how relaxing the realizability assumption affects the sample complexity of active learning. First, we extend existing results on query learning to show that any active learning algorithm for the realizable case can be transformed to tolerate random bounded rate class noise. Thus, bounded rate class noise adds little extra complication to active learning, and in particular exponential label complexity savings over passive learning are still possible. However, it is questionable whether this noise model is any more realistic in practice than assuming no noise at all. Our second result shows that if we move to the truly non-realizable model of statistical learning theory, then the label complexity of active learning has the same dependence Ω(1/ε²) on the accuracy parameter ε as the passive learning label complexity. More specifically, we show that under the assumption that the best classifier in the learner's hypothesis class has generalization error at most β > 0, the label complexity of active learning is Ω((β²/ε²) log(1/δ)), where the accuracy parameter ε measures how close to optimal within the hypothesis class the active learner has to get and δ is the confidence parameter. The implication of this lower bound is that exponential savings should not be expected in realistic models of active learning, and thus the label complexity goals in active learning should be refined.

© 2006 Springer
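The first result, that a realizable-case active learner can be made tolerant to random bounded rate class noise, rests on a standard repeated-query idea: if each label query independently returns the true label flipped with probability η < 1/2, then querying the same point k = O(log(1/δ′)/(1/2 − η)²) times and taking the majority vote recovers the true label with probability at least 1 − δ′ (by a Hoeffding bound). The sketch below is an illustration of that idea in Python, not the paper's exact construction; the oracle, noise rate eta, and per-label failure budget delta are all illustrative assumptions.

    import math
    import random

    def denoised_label(query_label, x, eta, delta):
        """Recover the true binary label of x w.h.p. via repeated noisy queries.

        query_label(x) returns the true label in {0, 1}, flipped
        independently with probability eta < 1/2 (bounded rate class
        noise). By Hoeffding's inequality,
            k >= ln(1/delta) / (2 * (1/2 - eta)**2)
        independent queries suffice for the majority vote to equal the
        true label with probability at least 1 - delta.
        """
        assert 0 <= eta < 0.5
        k = math.ceil(math.log(1 / delta) / (2 * (0.5 - eta) ** 2))
        votes = sum(query_label(x) for _ in range(k))  # number of 1-votes
        return 1 if 2 * votes > k else 0

    # Toy usage: a noisy oracle around the threshold concept 1[x >= 0.3].
    eta = 0.2
    oracle = lambda x: (1 if x >= 0.3 else 0) ^ (random.random() < eta)
    print(denoised_label(oracle, 0.7, eta=eta, delta=1e-3))  # prints 1 w.h.p.

A noise-tolerant learner can then run the realizable-case algorithm and answer each of its label queries through such a denoising subroutine, with a union bound over all queries preserving the overall confidence. The price is the 1/(1/2 − η)² blow-up in label complexity, which is why exponential savings survive bounded rate class noise but, by the paper's second result, not the general non-realizable setting.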