Author: Phil Watson.
Source: Lecture Notes in Artificial Intelligence, Vol. 1720, 1999, pp. 145-156.
Abstract. The basis of inductive learning is the process of generating and refuting hypotheses. Natural approaches to this form of learning assume that a data item which refutes one hypothesis opens the way for the introduction of a new (as yet unrefuted) hypothesis, and so such data items have attracted the most attention. Data items that do not refute the current hypothesis have until now been largely ignored in these processes, yet in practical learning situations they play the key role of corroborating those hypotheses that they do not refute.
We formalise a version of K.R. Popper's concept of degree of corroboration for inductive inference and use it in an inductive learning procedure that has the natural behaviour of outputting the most strongly corroborated (non-refuted) hypothesis at each stage. We demonstrate its utility by characterising several of the commonest identification types in the case of learning from text over class-preserving hypothesis spaces, and by proving the existence of canonical learning strategies for these types. We believe that in many cases these characterisations make the relationships between the types clearer than the standard characterisations do. The idea of learning with corroboration therefore provides a unifying approach for the field.
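The learning procedure described above can be illustrated with a toy sketch. This is not the paper's formal construction: the hypothesis space, the data stream, and the corroboration measure (here simply a count of consistent data items seen so far) are all simplifying assumptions made for illustration. The loop refutes hypotheses inconsistent with the data and, after each item, conjectures the most strongly corroborated surviving hypothesis.

```python
# Toy sketch of "learning with corroboration" (illustrative only, not
# the paper's formal procedure).  Hypotheses are predicates over data
# items; corroboration is naively measured as the number of consistent
# items observed, and refuted hypotheses are discarded permanently.

def learn_with_corroboration(hypotheses, stream):
    """After each data item, yield the name of the most strongly
    corroborated hypothesis among those not yet refuted."""
    corroboration = {name: 0 for name in hypotheses}
    refuted = set()
    for item in stream:
        for name, h in hypotheses.items():
            if name in refuted:
                continue
            if h(item):                      # consistent item corroborates h
                corroboration[name] += 1
            else:                            # inconsistent item refutes h
                refuted.add(name)
        alive = {n: c for n, c in corroboration.items() if n not in refuted}
        if alive:
            yield max(alive, key=alive.get)  # current conjecture

# Hypothetical example hypotheses over integers.
hyps = {
    "even":    lambda x: x % 2 == 0,
    "mult4":   lambda x: x % 4 == 0,
    "natural": lambda x: x >= 0,
}
conjectures = list(learn_with_corroboration(hyps, [4, 8, 2, 12]))
```

On this stream, "mult4" is refuted by the item 2, while "even" and "natural" remain equally corroborated; ties here fall to dictionary order, whereas the paper's formal degree of corroboration would discriminate between non-refuted hypotheses more finely.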
©Copyright 1999 Springer-Verlag