Leading Strategies in Competitive On-Line Prediction

Author: Vladimir Vovk

Source: Algorithmic Learning Theory, 17th International Conference, ALT 2006, Barcelona, Spain, October 2006, Proceedings (José L. Balcázar, Philip M. Long and Frank Stephan, Eds.), Lecture Notes in Artificial Intelligence 4264, pp. 214–228, Springer, 2006.

Abstract. We start from a simple asymptotic result for the problem of on-line regression with the quadratic loss function: the class of continuous limited-memory prediction strategies admits a “leading prediction strategy”, which not only asymptotically performs at least as well as any continuous limited-memory strategy but also satisfies the stronger property that the excess loss of any continuous limited-memory strategy is determined by how closely it imitates the leading strategy. More specifically, for any class of prediction strategies constituting a reproducing kernel Hilbert space we construct a leading strategy in the sense that the loss of any prediction strategy whose norm is not too large is determined by how closely it imitates the leading strategy. This result is extended to loss functions given by Bregman divergences and by strictly proper scoring rules.
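To make the abstract’s central claim concrete in the quadratic-loss case, here is a sketch of the kind of approximate identity the paper establishes; the notation is illustrative rather than the paper’s own: write $y_1, y_2, \ldots$ for the outcomes, $d_n$ for the predictions of a strategy $D$ belonging to the reproducing kernel Hilbert space, and $\ell_n$ for the predictions of the leading strategy. The statement that the excess loss of $D$ is “determined by how closely it imitates the leading strategy” then takes the form

\[
  \underbrace{\sum_{n=1}^{N} (y_n - d_n)^2}_{\text{loss of } D}
  \;-\;
  \underbrace{\sum_{n=1}^{N} (y_n - \ell_n)^2}_{\text{loss of leading strategy}}
  \;\approx\;
  \sum_{n=1}^{N} (d_n - \ell_n)^2,
\]

with the error of the approximation bounded, for strategies whose norm is not too large, by a term that is sublinear in $N$. In particular, $D$’s excess loss is approximately non-negative and is small precisely when its predictions stay close to those of the leading strategy. Roughly speaking, in the extensions mentioned at the end of the abstract, the squared differences $(d_n - \ell_n)^2$ on the right-hand side are replaced by the corresponding Bregman divergence, or by the divergence associated with the strictly proper scoring rule, between the two predictions.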


© Copyright 2006, Springer