Supervised Learning and Co-training

Authors: Malte Darnstädt, Hans Ulrich Simon, and Balázs Szörényi

Source: Algorithmic Learning Theory, 22nd International Conference, ALT 2011, Espoo, Finland, October 5-7, 2011, Proceedings.
Lecture Notes in Artificial Intelligence 6925, pp. 425-439, Springer, 2011.

Abstract. Co-training under the Conditional Independence Assumption is among the models that demonstrate how radically the need for labeled data can be reduced if a huge amount of unlabeled data is available. In this paper, we explore how much credit for this saving must be assigned solely to the extra assumptions underlying the Co-training model. To this end, we compute general (almost tight) upper and lower bounds on the sample size needed to achieve the success criterion of PAC-learning within the model of Co-training under the Conditional Independence Assumption in a purely supervised setting. The upper bounds lie significantly below the lower bounds for PAC-learning without Co-training. Thus, Co-training saves labeled data even when not combined with unlabeled data. On the other hand, the saving is much less radical than the known savings in the semi-supervised setting.
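For orientation, a minimal sketch of the Conditional Independence Assumption as it is commonly stated in the co-training literature (the notation x = (x_1, x_2) for the two views and y for the label is our own and not taken from the paper): each instance is seen through two views, and the views are assumed to be independent conditioned on the label,

\[ \Pr[x_1, x_2 \mid y] \;=\; \Pr[x_1 \mid y]\,\Pr[x_2 \mid y]. \]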

©Copyright 2011, Springer
