Though one-variable pattern languages can neither be finitely inferred from positive data nor PAC-learned, our approach can be extended to a probabilistic finite learner that exactly infers all one-variable pattern languages from positive data with high confidence.
It is a long-standing open problem whether pattern languages can be learned when empty substitutions for the pattern variables are also allowed. Our learning strategy generalizes to this setting as well.
Finally, we present experimental results on the behavior of this new learning algorithm in practice.
The algorithms presented in this paper have been implemented in Java. You may try them out on our Average Case Optimal 1-Variable Pattern Language Learning page, which is also mirrored in Japan.
This work was performed while this author was visiting the Department of Informatics at Kyushu University and was supported by the Japan Society for the Promotion of Science under Grant JSPS 29716102.
©Copyright 2000 Academic Press