On Following the Perturbed Leader in the Bandit Setting

Authors: Jussi Kujala and Tapio Elomaa

Source: Algorithmic Learning Theory, 16th International Conference, ALT 2005, Singapore, October 2005, Proceedings (Sanjay Jain, Hans Ulrich Simon, and Etsuji Tomita, Eds.), Lecture Notes in Artificial Intelligence 3734, pp. 371-385, Springer 2005.

Abstract. In an online decision problem, an algorithm is required at each time step to choose one of the feasible points without knowing the cost associated with it. An adversary assigns the costs to the possible decisions either obliviously or adaptively. The online algorithm naturally attempts to incur as little cost as possible. The difference between the cost of the online algorithm and that of the best static decision in hindsight is called the regret of the algorithm.
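(As an illustration of this definition, with placeholder symbols not taken from the paper: writing $x_t$ for the decision chosen at step $t$, $c_t$ for the adversary's cost function at that step, and $\mathcal{X}$ for the set of feasible points, the regret over $T$ rounds is $\mathrm{Regret}_T = \sum_{t=1}^{T} c_t(x_t) - \min_{x \in \mathcal{X}} \sum_{t=1}^{T} c_t(x)$.)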
Kalai and Vempala [1] showed that it is possible to obtain efficient solutions to some problems with a linear cost function by following the perturbed leader. Their solution requires the costs of all decisions to be known. Recently there has also been progress in the bandit setting, where only the cost of the selected decision is observed. A regret bound of $O(T^{2/3})$ over $T$ rounds against an oblivious adversary was first shown by Awerbuch and Kleinberg [2], and later McMahan and Blum [3] showed that a bound of $O(\sqrt{\ln T}\, T^{3/4})$ is attainable against an adaptive adversary.

In this paper we study Kalai and Vempala's model from the viewpoint of bandit algorithms. We show that the algorithm of McMahan and Blum attains a regret of $O(T^{2/3})$ against an oblivious adversary. Moreover, we show a tighter $O(\sqrt{m\ln m}\sqrt{T})$ bound for the expert setting using $m$ experts.
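To make the full-information model of Kalai and Vempala [1] concrete, the following Python sketch shows following the perturbed leader in the expert setting. The function name, the exponential perturbation scale epsilon, and the cost representation are illustrative assumptions; this is not the bandit algorithm analysed in the paper, since it sees the costs of all experts on every round.

import random

def follow_the_perturbed_leader(costs, epsilon=1.0):
    # Illustrative sketch of FPL in the full-information expert setting.
    # costs: list of rounds, each a list of per-expert costs in [0, 1].
    # epsilon: scale of the exponential perturbation (assumed tuning parameter).
    m = len(costs[0])              # number of experts
    cumulative = [0.0] * m         # cumulative cost of each expert so far
    total_cost = 0.0
    for round_costs in costs:
        # Perturb each expert's cumulative cost and follow the perturbed leader,
        # i.e. pick the expert whose perturbed cumulative cost is smallest.
        perturbed = [cumulative[i] - random.expovariate(epsilon) for i in range(m)]
        choice = min(range(m), key=lambda i: perturbed[i])
        total_cost += round_costs[choice]
        for i in range(m):         # full information: every expert's cost is revealed
            cumulative[i] += round_costs[i]
    best_in_hindsight = min(cumulative)
    return total_cost, best_in_hindsight   # regret = total_cost - best_in_hindsight

In the bandit setting only round_costs[choice] would be revealed, so the cumulative costs of the other experts would have to be estimated; coping with this missing information is the difficulty addressed by the bandit algorithms discussed above.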


©Copyright 2005, Springer