### Tailoring Representations to Different Requirements

**Author:** Katharina Morik

**Source:** *Lecture Notes in Artificial Intelligence*, Vol. 1720, 1999, pp. 1-12.

Designing the representation languages for the input and output of
a learning algorithm is the hardest task within machine learning
applications. Transforming the given representation of observations
into a well-suited language L_{E} may ease learning such
that a simple and efficient learning
algorithm can solve the learning problem. Learnability is defined
with respect to the representation of the output of learning, L_{H}.
If predictive accuracy is the only
criterion for the success of learning,
choosing L_{H} means finding the hypothesis space that
contains the solution and whose concepts are most easily learnable. Additional
criteria for the success of learning, such as comprehensibility and
embeddedness, may call for transformations of L_{H} so that users can
easily interpret, and other systems can easily exploit, the learning
results. Designing a single language L_{H} that is optimal with respect to
all of these criteria is too difficult a task.
Instead, we design families of representations, where each family
member is well suited for a particular set of requirements, and implement
transformations between the representations.
In this paper, we discuss a representation family of Horn logic.
Work on tailoring representations is illustrated by a robot application.
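One transformation of the kind the abstract alludes to is propositionalization: relational, Horn-clause style facts are mapped onto fixed-length boolean feature vectors, so that a simple and efficient propositional learner can be applied. The following is a minimal sketch under invented assumptions; the predicates, episode names, and encoding are hypothetical illustrations, not taken from the paper.

```python
# Hypothetical sketch: propositionalizing relational facts about
# robot sensing episodes into 0/1 feature vectors (language L_E).
# All predicate and episode names below are invented for illustration.

# Ground facts as (predicate, episode, argument) triples.
facts = {
    ("moved_through", "e1", "doorway"),
    ("sensed", "e1", "wall_left"),
    ("moved_through", "e2", "corridor"),
    ("sensed", "e2", "wall_left"),
    ("sensed", "e2", "wall_right"),
}

# Each (predicate, argument) pattern becomes one boolean feature.
features = sorted({(pred, arg) for (pred, _episode, arg) in facts})

def propositionalize(episode):
    """Map one episode onto a fixed-length 0/1 feature vector."""
    return [1 if (pred, episode, arg) in facts else 0
            for (pred, arg) in features]

print(features)
print(propositionalize("e1"))  # -> [0, 1, 1, 0]
print(propositionalize("e2"))  # -> [1, 0, 1, 1]
```

The resulting vectors can be handed to any propositional learner; the price of the simpler representation is that relations between episodes are no longer expressible, which is exactly the kind of trade-off that motivates a family of representations with transformations between them.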

© Copyright 1999 Springer-Verlag