Algorithmic Learning Theory
This page is dedicated to all aspects of algorithmic learning theory.
It will be extended whenever time permits, so if something you are
looking for is missing right now, please check back later. Comments and
suggestions for improvement are welcome; just send me a mail.
The links on the left-hand side lead to pages explaining
the overall goal of learning, defining different learning models,
and summarizing results.
The links below lead to pages containing applets for algorithms we
have developed and/or carefully analyzed with respect to their time
(and/or space) complexity.
Related papers we have published:
- Rüdiger Reischuk and Thomas Zeugmann,
An Average-Case Optimal One-Variable Pattern Language Learner,
Journal of Computer and System Sciences, Vol. 60, No. 2, 2000, 302-335.
(Special Issue for COLT '98).
Abstract.
- Thomas Erlebach, Peter Rossmanith, Hans Stadtherr, Angelika Steger, and Thomas Zeugmann,
Learning One-Variable Pattern Languages Very Efficiently on Average, in Parallel, and by Asking Queries,
Theoretical Computer Science, Vol. 261, Issue 1, 2001, 119-156.
(Special Issue for ALT '97).
Abstract.
- Thomas Zeugmann,
From Learning in the Limit to Stochastic Finite Learning,
Theoretical Computer Science, Vol. 364, Issue 1, 2006, 77-97.
(Special Issue for ALT 2003).
Abstract.
- John Case, Sanjay Jain, Rüdiger Reischuk, Frank Stephan, and Thomas Zeugmann,
Learning a Subclass of Regular Patterns in Polynomial Time,
Theoretical Computer Science, Vol. 364, Issue 1, 2006, 115-131.
(Special Issue for ALT 2003).
Abstract.