
The Wholist Algorithm

The applet below realizes Algorithm P, which you have just seen. To use it, first choose your target monomial. Preferably, do not choose ``FALSE,'' since in that case it would not be very interesting. Now generate a sequence of input vectors b and compute the corresponding labels c(b) for the concept c described by your target monomial.

Next, click on the white area below ``Enter next example.'' Then input your first labelled example b, c(b). For example, you may input 000111, 1. Note that the comma is essential. Then click Enter and wait for the response to be shown in the big area. After the response has been displayed, you may enter the next vector and the next label.

Entering both the vector and the label at the same time is just for your convenience. The applet shows you what Algorithm P has predicted on the current input vector b. If the prediction is correct, no new hypothesis is computed, and hence none is displayed. If a prediction error occurred, the applet honestly tells you so and displays its new hypothesis. Note that negated variables are displayed with a minus sign, e.g., not x1 is written as -x1.
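The predict-then-update behavior described above can be sketched in a few lines. This is a minimal sketch of how the Wholist strategy for monomials is commonly implemented (start with all 2n literals, and on a prediction error on a positive example delete every literal the example contradicts); all function and variable names are my own, and the applet's actual code may of course differ.

```python
# Sketch of the Wholist strategy for monomials over {0,1}^n.
# A literal (i, True) stands for x_{i+1}, and (i, False) for -x_{i+1}.

def initial_hypothesis(n):
    # Start with all 2n literals; this monomial is never satisfied,
    # so the initial prediction on every vector is 0.
    return {(i, positive) for i in range(n) for positive in (True, False)}

def predict(hypothesis, b):
    # The hypothesis monomial evaluates to 1 on b iff every
    # remaining literal agrees with the corresponding bit of b.
    return all((b[i] == 1) == positive for (i, positive) in hypothesis)

def update(hypothesis, b, label):
    # A prediction error can only happen on a positive example here;
    # then every literal contradicted by b is deleted.
    if label == 1 and not predict(hypothesis, b):
        hypothesis = {(i, positive) for (i, positive) in hypothesis
                      if (b[i] == 1) == positive}
    return hypothesis
```

For instance, with n = 3 and target monomial x1 -x3, feeding the positive examples 100 and 110 leaves exactly the literals x1 and -x3 in the hypothesis; negative examples never change it.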

If you would like to run a new series, please hit the button ``RELOAD THIS APPLET.'' Enjoy!

Hopefully, you have now understood how the Wholist algorithm works by trying it on some small examples. Did you find out how many examples it takes in the best case and in the worst case?

Then you may wonder how many examples are needed on average.

This finishes our introduction. We continue by explaining further learning models. The next model is Learning in the Limit.


This applet has been written by Olaf Trzebin and Thomas Zeugmann. Please report any bugs you find to

Thomas Zeugmann

Mailbox "thomas" at "ist.hokudai.ac.jp"