Distributed Markov Chain Monte Carlo Sampling based on the Alternating Direction Method of Multipliers
Authors: Alexandros E. Tzikas, Licio Romao, Mert Pilanci, Alessandro Abate, Mykel J. Kochenderfer
Abstract: Many machine learning applications require operating on a spatially distributed dataset. Despite technological advances, privacy considerations and communication constraints may prevent gathering the entire dataset in a central unit. In this paper, we propose a distributed sampling scheme based on the alternating direction method of multipliers, which is commonly used in the optimization literature due to its fast convergence. In contrast to distributed optimization, distributed sampling allows for uncertainty quantification in Bayesian inference tasks. We provide both theoretical guarantees of our algorithm's convergence and experimental evidence of its superiority to the state of the art. For our theoretical results, we use convex optimization tools to establish a fundamental inequality on the generated local sample iterates. This inequality allows us to show convergence of the distribution associated with these iterates to the underlying target distribution in Wasserstein distance. In simulation, we deploy our algorithm on linear and logistic regression tasks and illustrate its fast convergence compared to existing gradient-based methods.
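To give a concrete sense of the general idea (not the paper's exact algorithm), the sketch below runs a toy consensus-ADMM loop in which each agent's local minimization step is replaced by a Gaussian draw centered at the minimizer, a common way to turn ADMM into a sampler. All parameters (two agents, quadratic local potentials, the penalty `rho`) are illustrative assumptions; the target density is proportional to exp(-f_1 - f_2).

```python
import numpy as np

# Illustrative sketch only, not the authors' algorithm: two agents hold
# quadratic potentials f_i(x) = (lam_i / 2) * (x - m_i)^2, so the target
# is a Gaussian with mean (sum lam_i * m_i) / (sum lam_i).
rng = np.random.default_rng(0)

lam = np.array([1.0, 2.0])   # local precisions (assumed toy values)
m = np.array([0.0, 3.0])     # local means
rho = 1.0                    # ADMM penalty parameter
true_mean = (lam * m).sum() / lam.sum()  # = 2.0

x = np.zeros(2)   # local iterates, one per agent
u = np.zeros(2)   # scaled dual variables
z = 0.0           # consensus variable
samples = []

for t in range(6000):
    # Local step: instead of minimizing f_i(x) + (rho/2)(x - z + u_i)^2,
    # each agent draws from the Gaussian whose mode is that minimizer.
    mode = (lam * m + rho * (z - u)) / (lam + rho)
    x = mode + rng.normal(size=2) / np.sqrt(lam + rho)
    # Consensus step: average the local iterates plus duals.
    z = np.mean(x + u)
    # Dual ascent step.
    u = u + x - z
    samples.append(z)

post_mean = np.mean(samples[1000:])  # discard burn-in
print(post_mean)
```

Because the dynamics are linear with additive zero-mean noise, the consensus iterates fluctuate around the noiseless ADMM fixed point, so the post-burn-in average lands near the target mean; the paper's analysis concerns the much stronger statement that the iterate distribution converges to the target in Wasserstein distance.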