**Authors: Ryan Rifkin, Massimiliano Pontil and Alessandro Verri**.

**Source:** *Lecture Notes in Artificial Intelligence*, Vol. 1720,
1999, pp. 252–263.

**Abstract.**
When training Support Vector Machines (SVMs) over non-separable data sets,
the threshold *b* is set using any dual cost coefficient that is
strictly between the bounds of 0 and *C*.
We show that there exist SVM training problems with dual optimal solutions
with all coefficients at bounds, but that *all* such problems are
degenerate in the sense that the "optimal separating hyperplane"
is given by *w* = 0, and the resulting (degenerate) SVM will classify all
future points identically (to the class that supplies more training data).
We also derive necessary and sufficient conditions on the input data for this to
occur. Finally, we show that an SVM training problem can always be made
degenerate by the addition of a *single* data point belonging to a
certain unbounded polyhedron, which we
characterize in terms of its extreme points and rays.
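The degeneracy described above can be illustrated with a minimal numerical sketch (not from the paper; the two-point data set and variable names are illustrative assumptions): when both classes supply the same input point, the equality constraint of the SVM dual annihilates the quadratic term, every dual coefficient is driven to its upper bound *C*, and the resulting hyperplane normal is *w* = 0.

```python
import numpy as np

# Illustrative toy data (assumption, not from the paper): both classes
# supply the identical point x = (1, 1), so the set is non-separable.
X = np.array([[1.0, 1.0], [1.0, 1.0]])
y = np.array([1.0, -1.0])
C = 1.0

# Dual objective: W(a) = sum_i a_i - 0.5 * sum_ij a_i a_j y_i y_j <x_i, x_j>.
# With identical inputs the quadratic term equals 0.5 * (sum_i a_i y_i)^2 * <x, x>,
# and the dual equality constraint sum_i a_i y_i = 0 makes it vanish, so
# W(a) = sum_i a_i is maximized by pushing every coefficient to its bound C.
alpha = np.full(2, C)            # all dual coefficients at the bound C
assert np.isclose(alpha @ y, 0)  # feasible: sum_i alpha_i y_i = 0

# The resulting "optimal separating hyperplane" normal:
w = (alpha * y) @ X
print(w)  # -> [0. 0.], the degenerate w = 0 of the abstract
```

Since *w* = 0, the decision function reduces to the constant *b*, so every future point receives the same label, matching the behavior the abstract ascribes to degenerate SVMs.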

© Copyright 1999 Springer-Verlag