Publications by Lawrence Carin.

Papers Published

  1. Zhang, J; Zhao, Y; Zhang, R; Carin, L; Chen, C, Variance reduction in stochastic particle-optimization sampling, 37th International Conference on Machine Learning, ICML 2020, vol. PartF168147-15 (January, 2020), pp. 11244-11253.
    (last updated on 2024/12/31)

    Abstract:
    Stochastic particle-optimization sampling (SPOS) is a recently developed scalable Bayesian sampling framework that unifies stochastic gradient MCMC (SG-MCMC) and Stein variational gradient descent (SVGD) algorithms based on Wasserstein gradient flows. Backed by a rigorous non-asymptotic convergence theory, SPOS can avoid the particle-collapsing pitfall of SVGD. However, the variance-reduction effect in SPOS has not been characterized. In this paper, we address this gap by presenting several variance-reduction techniques for SPOS. Specifically, we propose three variants of variance-reduced SPOS, called SAGA particle-optimization sampling (SAGA-POS), SVRG particle-optimization sampling (SVRG-POS), and a variant of SVRG-POS that avoids full gradient computations, denoted SVRG-POS+. Importantly, we provide non-asymptotic convergence guarantees for these algorithms in terms of the 2-Wasserstein metric and analyze their complexities. The results show that our algorithms yield better convergence rates than existing variance-reduced variants of stochastic Langevin dynamics, though more space is required to store the particles during training. Our theory aligns well with experimental results on both synthetic and real datasets.
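    The abstract names the ingredients (an SVGD-style interacting-particle drift, Langevin noise, and an SVRG control variate on the stochastic gradients) but not the update equations. The sketch below is a minimal, hedged reading of one SVRG-POS-style step, not the paper's algorithm: the RBF kernel, the toy Gaussian model in grad_log_p, the inverse-temperature parameter beta, and the names svrg_pos_step and full_grads are all illustrative assumptions.

    ```python
    import numpy as np

    def grad_log_p(theta, x_batch):
        # Placeholder minibatch gradient of log p(theta | data); here a
        # unit-variance Gaussian likelihood with a flat prior (assumption).
        return np.sum(x_batch - theta, axis=0)

    def rbf_kernel(thetas, bandwidth=1.0):
        # Pairwise RBF kernel K(theta_j, theta_i) and its gradient in theta_j,
        # the standard SVGD choice (an assumption here, not from the abstract).
        diffs = thetas[:, None, :] - thetas[None, :, :]    # (M, M, d): theta_j - theta_i
        K = np.exp(-np.sum(diffs ** 2, axis=-1) / (2 * bandwidth ** 2))
        grad_K = -diffs * K[..., None] / bandwidth ** 2    # grad wrt theta_j
        return K, grad_K

    def svrg_pos_step(thetas, snapshot, full_grads, data, step, beta,
                      batch_size, rng):
        # One hypothetical SVRG-POS-style update: an SVRG control variate
        # plugged into an SPOS particle step (SVGD drift + Langevin noise).
        M, d = thetas.shape
        idx = rng.choice(len(data), size=batch_size, replace=False)
        scale = len(data) / batch_size
        # SVRG estimator: g_B(theta) - g_B(snapshot) + full gradient at snapshot.
        vr_grads = np.stack([
            scale * (grad_log_p(thetas[m], data[idx])
                     - grad_log_p(snapshot[m], data[idx])) + full_grads[m]
            for m in range(M)
        ])
        K, grad_K = rbf_kernel(thetas)
        drift = (K @ vr_grads + grad_K.sum(axis=0)) / M    # attraction + repulsion
        noise = np.sqrt(2.0 * step / beta) * rng.standard_normal((M, d))
        return thetas + step * drift + noise
    ```

    In standard SVRG fashion, snapshot and full_grads would be refreshed at each outer epoch; per the abstract, SVRG-POS+ would replace the exact full-gradient computation with a cheaper estimate, though its precise form is not given here.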
