Scale-invariant unconstrained online learning
Authors: Wojciech Kotłowski
Summary: We consider a variant of online convex optimization in which both the instances (input vectors) and the comparator (weight vector) are unconstrained. We exploit a natural scale-invariance symmetry in our unconstrained setting: the predictions of the optimal comparator are invariant under any linear transformation of the instances. Our goal is to design online algorithms which also enjoy this property, i.e. are scale-invariant. We start with the case of coordinate-wise invariance, in which the individual coordinates (features) can be arbitrarily rescaled. We give an algorithm which achieves an essentially optimal regret bound in this setup, expressed by means of a coordinate-wise scale-invariant norm of the comparator. We then study general invariance with respect to arbitrary linear transformations. We first give a negative result, showing that no algorithm can achieve a meaningful bound in terms of a scale-invariant norm of the comparator in the worst case. Next, we complement this result with a positive one, providing an algorithm which "almost" achieves the desired bound, incurring only a logarithmic overhead in terms of the norm of the instances.
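To make the invariance symmetry concrete: if every instance x_t is mapped to A x_t for an invertible matrix A, the comparator u = A^{-T} w produces exactly the same linear predictions as w did on the original instances, since ⟨A^{-T} w, A x_t⟩ = ⟨w, x_t⟩. The sketch below (a hypothetical illustration, not code from the paper) verifies this numerically; the coordinate-wise case corresponds to A being diagonal.

```python
import numpy as np

rng = np.random.default_rng(0)
d, T = 5, 10

X = rng.standard_normal((T, d))  # instances x_1, ..., x_T as rows
w = rng.standard_normal(d)       # comparator weight vector

# An arbitrary linear transformation of the instances
# (a random Gaussian matrix is invertible with probability 1).
A = rng.standard_normal((d, d))

X_transformed = X @ A.T                  # each instance x_t becomes A x_t
w_transformed = np.linalg.inv(A).T @ w   # comparator becomes A^{-T} w

# Predictions are invariant: <A^{-T} w, A x_t> = <w, x_t> for all t.
print(np.allclose(X @ w, X_transformed @ w_transformed))  # True
```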