Learning Recursive Concepts with Anomalies

Authors: Gunter Grieser, Steffen Lange and Thomas Zeugmann

Source: Algorithmic Learning Theory, 11th International Conference, ALT 2000, Sydney, Australia, December 2000, Proceedings (Hiroki Arimura, Sanjay Jain and Arun Sharma, Eds.), Lecture Notes in Artificial Intelligence 1968, pp. 101-115, Springer-Verlag 2000.

Abstract. This paper provides a systematic study of inductive inference of indexable concept classes in learning scenarios in which the learner is successful if its final hypothesis describes a finite variant of the target concept, henceforth called learning with anomalies. As usual, we distinguish between learning from positive data only and learning from both positive and negative data. We investigate the following learning models: finite identification, conservative inference, set-driven learning, and behaviorally correct learning. In general, we focus on the case in which the number of allowed anomalies is finite but not a priori bounded; however, we also present a few sample results concerning the special case of learning with an a priori bounded number of anomalies. We provide characterizations of the corresponding models of learning with anomalies in terms of finite tell-tale sets. The observed variations in the degree of recursiveness of the relevant tell-tale sets are already sufficient to quantify the differences between the corresponding models of learning with anomalies. In addition, we study variants of incremental learning and derive a complete picture of the relations among all models of learning with and without anomalies mentioned above.

©Copyright 2000, Springer
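
For orientation, the following LaTeX sketch recalls the standard inductive-inference notation behind the phrase "finite variant of the target concept"; the symbols ($=^a$, $=^*$, the tell-tale set $T_j$) follow the usual conventions of the field and are not taken verbatim from this paper.

```latex
% Finite variants (standard notation, assumed here for illustration):
% L' is an a-variant of L if their symmetric difference has at most a elements,
% and a *-variant of L if the symmetric difference is finite.
\[
  L' =^{a} L \iff \operatorname{card}\bigl(L' \,\triangle\, L\bigr) \le a,
  \qquad
  L' =^{*} L \iff \operatorname{card}\bigl(L' \,\triangle\, L\bigr) < \infty .
\]
% Learning with anomalies: the learner's final hypothesis h is only required to
% describe a concept c_h with c_h =^{*} c (respectively c_h =^{a} c when the
% number of anomalies is a priori bounded by a), where c is the target concept.
%
% For comparison, Angluin's classical tell-tale condition for anomaly-free
% learning from positive data: an indexable class (c_j)_{j \in \mathbb{N}} is
% learnable in the limit from text iff every c_j has a finite tell-tale
% T_j \subseteq c_j such that no c_k satisfies T_j \subseteq c_k \subsetneq c_j.
```

The characterizations announced in the abstract replace exact identification by identification up to $=^*$ (or $=^a$) and distinguish the resulting models by how effectively the corresponding tell-tale families can be computed or enumerated.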