Near-Optimal Bounds for Learning Gaussian Halfspaces with Random Classification Noise
Authors: Ilias Diakonikolas, Jelena Diakonikolas, Daniel M. Kane, Puqian Wang, Nikos Zarifis
Abstract: We study the problem of learning general (i.e., not necessarily homogeneous) halfspaces with Random Classification Noise under the Gaussian distribution. We establish nearly matching algorithmic and Statistical Query (SQ) lower bound results revealing a surprising information-computation gap for this basic problem. Specifically, the sample complexity of this learning problem is Θ̃(d/ε), where d is the dimension and ε is the excess error. Our positive result is a computationally efficient learning algorithm with sample complexity Õ(d/ε + d/(max{p, ε})^2), where p quantifies the bias of the target halfspace. On the lower bound side, we show that any efficient SQ algorithm (or low-degree test) for the problem requires sample complexity at least Ω(d^{1/2}/(max{p, ε})^2). Our lower bound suggests that this quadratic dependence on 1/ε is inherent for efficient algorithms.
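To make the learning model concrete, here is a minimal NumPy sketch of the data-generating process the abstract refers to: a general (possibly biased) halfspace sign(⟨w, x⟩ − t) over standard Gaussian inputs, with each label flipped independently with some probability η < 1/2 (Random Classification Noise). The function and parameter names (sample_rcn_halfspace, eta, t) are illustrative choices, not the paper's notation; the threshold t is what controls the bias p of the target halfspace.

```python
import numpy as np

def sample_rcn_halfspace(n, d, w, t, eta, rng):
    """Draw n labeled examples from a general Gaussian halfspace
    corrupted by Random Classification Noise at rate eta.

    Illustrative sketch of the learning model only; names and
    signature are hypothetical, not from the paper.
    """
    X = rng.standard_normal((n, d))      # x ~ N(0, I_d)
    y = np.sign(X @ w - t)               # clean label of the halfspace
    y[y == 0] = 1                        # break ties (measure-zero event)
    flips = rng.random(n) < eta          # each label flips independently w.p. eta
    y[flips] *= -1
    return X, y

# Usage: a biased halfspace (t > 0 shifts mass to the negative side).
rng = np.random.default_rng(0)
d = 10
w = np.zeros(d)
w[0] = 1.0                               # unit normal vector of the target
X, y = sample_rcn_halfspace(n=1000, d=d, w=w, t=0.5, eta=0.1, rng=rng)
```

The learner's goal, in the sense of the abstract, is to output a hypothesis halfspace with misclassification error at most opt + ε from such samples; the stated bounds trace how the sample cost of doing this efficiently scales with d, ε, and the bias p.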