The Last-Step Minimax Algorithm

Authors: Eiji Takimoto and Manfred Warmuth.

Source: Lecture Notes in Artificial Intelligence, Vol. 1968, 2000, pp. 279-290.

Abstract. We consider on-line density estimation with a parameterized density from an exponential family. In each trial t, the learner predicts a parameter θ_t. Then it receives an instance x_t chosen by the adversary and incurs loss -ln p(x_t | θ_t), the negative log-likelihood of x_t with respect to the learner's predicted density. The performance of the learner is measured by the regret, defined as the total loss of the learner minus the total loss of the best parameter chosen off-line. We develop an algorithm called the Last-step Minimax Algorithm that predicts with the minimax optimal parameter assuming that the current trial is the last one. For one-dimensional exponential families, we give an explicit form of the prediction of the Last-step Minimax Algorithm and show that its regret is O(ln T), where T is the number of trials. In particular, for Bernoulli density estimation the Last-step Minimax Algorithm is slightly better than the standard Krichevsky-Trofimov probability estimator.
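As an illustration of the Bernoulli case described in the abstract, the sketch below (Python, with function names of our own choosing) computes a last-step minimax prediction by treating the current trial as the last one: the prediction equalizes, over the two possible next outcomes, the learner's loss minus the best offline loss of the extended sequence. The resulting closed form in terms of maximized likelihoods is our reconstruction under those definitions, not a formula quoted from the paper; the Krichevsky-Trofimov estimate is included only for comparison.

```python
import math


def best_offline_loss(k, n):
    """Negative log-likelihood of k ones in n Bernoulli trials under the
    maximum-likelihood parameter k/n (using the convention 0 * ln 0 = 0)."""
    loss = 0.0
    if k > 0:
        loss -= k * math.log(k / n)
    if k < n:
        loss -= (n - k) * math.log((n - k) / n)
    return loss


def last_step_minimax_bernoulli(s, t):
    """Prediction for trial t+1 after observing s ones among t outcomes.

    Chooses theta minimizing the worst case, over the next outcome x in {0, 1},
    of the learner's loss -ln p(x | theta) minus the best offline loss of the
    sequence extended by x, i.e. it plays minimax as if trial t+1 were the last.
    Equating the two cases of the inner max yields the ratio of the maximized
    likelihoods of the two possible extended sequences.
    """
    m1 = math.exp(-best_offline_loss(s + 1, t + 1))  # ML likelihood if next x = 1
    m0 = math.exp(-best_offline_loss(s, t + 1))      # ML likelihood if next x = 0
    return m1 / (m0 + m1)


def krichevsky_trofimov(s, t):
    """Standard add-1/2 (Krichevsky-Trofimov) probability estimate."""
    return (s + 0.5) / (t + 1)


if __name__ == "__main__":
    # Compare the two predictions on a few (ones seen, trials seen) counts.
    for s, t in [(0, 0), (3, 10), (7, 10), (50, 100)]:
        print(s, t, last_step_minimax_bernoulli(s, t), krichevsky_trofimov(s, t))
```

On these examples the two estimators are very close (both predict 1/2 before any data), which is consistent with the abstract's remark that the Last-step Minimax Algorithm improves only slightly on the Krichevsky-Trofimov estimator.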

©Copyright 2000 Springer