Maximizing agreements and coagnostic learning
Authors: Nader H. Bshouty and Lynn Burroughs
Source: Theoretical Computer Science, Vol. 350, Issue 1, January 2006, pp. 24-39
Abstract. This paper studies α-coagnostic learnability of classes of Boolean formulas. To α-coagnostic learn C from H, the learner seeks a hypothesis h ∈ H whose probability of agreement (rather than disagreement, as in agnostic learning) with a labeled example is within a factor α of the best agreement probability achieved by any f ∈ C. Although 1-coagnostic learning is equivalent to agnostic learning, this is not true of α-coagnostic learning for ½ < α < 1. It can be seen that α-coagnostic learnability is equivalent to the α-approximability of maximum agreement problems. In these problems we are given a labeled sample S and must find an h ∈ H that agrees with as many examples in S as the best f ∈ C does. Maximum agreement problems have been studied extensively for classes such as monomials, monotone monomials, antimonotone monomials, halfspaces and balls. We further the study of these problems and of some extensions of them. For the above classes we improve the best previously known hardness factors α for α-coagnostic learning. We also find the first constant lower bounds for decision lists, exclusive-or, halfspaces (over the Boolean domain), 2-term DNF and 2-term multivariate polynomials.
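In symbols, and writing D for the distribution over labeled examples (a name assumed here for illustration, since the abstract does not fix notation), an α-coagnostic learner must output an h ∈ H satisfying

\[
  \Pr_{(x,y)\sim D}\bigl[h(x)=y\bigr] \;\ge\; \alpha \cdot \max_{f\in C}\, \Pr_{(x,y)\sim D}\bigl[f(x)=y\bigr],
\]

and the corresponding maximum agreement problem is the sample analogue: given a labeled sample S, find an h ∈ H whose number of agreements with S is at least an α fraction of the number achieved by the best f ∈ C.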
©Copyright 2005 Elsevier Science B.V.