Accelerating Nonconvex Learning via Replica Exchange Langevin Diffusion

Authors: Yi Chen, Jinglin Chen, Jing Dong, Jian Peng, Zhaoran Wang

ICLR 2019

Reproducibility Variable | Result | LLM Response
Research Type | Theoretical | We theoretically analyze the acceleration effect of replica exchange from two perspectives: (i) the convergence in χ²-divergence, and (ii) the large deviation principle. Such an acceleration effect allows us to approach the global minima faster. Furthermore, by discretizing the replica exchange Langevin diffusion, we obtain a discrete-time algorithm. For such an algorithm, we quantify its discretization error in theory and demonstrate its acceleration effect in practice.
Researcher Affiliation | Academia | Yi Chen (Department of Industrial Engineering & Management Science, Northwestern University, Evanston, IL 60201, USA; yichen2016@u.northwestern.edu); Jinglin Chen (Department of Computer Science, University of Illinois at Urbana-Champaign, Urbana, IL 61801, USA; jinglinc@illinois.edu); Jing Dong (Columbia Business School, Columbia University, New York City, NY 10027, USA; jing.dong@gsb.columbia.edu); Jian Peng (Department of Computer Science, University of Illinois at Urbana-Champaign, Urbana, IL 61801, USA; jianpeng@illinois.edu); Zhaoran Wang (Department of Industrial Engineering & Management Science, Northwestern University, Evanston, IL 60201, USA; zhaoran.wang@northwestern.edu)
Pseudocode | No | The paper describes the discrete-time algorithm via equations (3.15) and (3.16) in running text, but does not present it as a formal pseudocode or algorithm block (a minimal sketch of such an update loop is given below, after this table).
Open Source Code | No | The paper makes no statement about releasing open-source code and provides no link to a code repository.
Open Datasets | No | The paper focuses on theoretical analysis and does not mention specific training datasets or their public availability.
Dataset Splits | No | The paper focuses on theoretical analysis and does not provide training/validation/test dataset splits.
Hardware Specification | No | The paper does not specify the hardware used for any practical demonstrations or experiments.
Software Dependencies | No | The paper does not list any software dependencies with version numbers.
Experiment Setup | No | The paper discusses theoretical parameters such as the swapping intensity and the temperatures, but does not report specific experimental setup details such as hyperparameters (e.g., learning rate, batch size) for an empirical training run.
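
Because the paper presents its discrete-time algorithm only through equations (3.15) and (3.16) in running text, a short code sketch may help. The snippet below is an illustrative reconstruction rather than the paper's own algorithm: the function name, the default step size, temperatures, and swapping intensity, and the exact per-step swap probability (step size × intensity × the standard parallel-tempering acceptance ratio) are all assumptions layered on the generic replica exchange Langevin scheme.

```python
import numpy as np

def replica_exchange_langevin(f, grad_f, x0, y0, eta=1e-2, tau1=0.1, tau2=1.0,
                              a=1.0, n_steps=10_000, seed=0):
    """Illustrative sketch of discrete-time replica exchange Langevin dynamics.

    Two Euler-Maruyama Langevin chains run at temperatures tau1 < tau2 and
    occasionally swap positions with a Metropolis-style probability, so the
    exploratory high-temperature chain can hand promising regions over to the
    exploitative low-temperature chain. All defaults here are assumptions.
    """
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float)
    y = np.asarray(y0, dtype=float)
    d = x.size
    for _ in range(n_steps):
        # Gradient step plus temperature-scaled Gaussian noise for each replica.
        x = x - eta * grad_f(x) + np.sqrt(2.0 * eta * tau1) * rng.standard_normal(d)
        y = y - eta * grad_f(y) + np.sqrt(2.0 * eta * tau2) * rng.standard_normal(d)
        # Assumed swap rule: step size times swapping intensity a times the
        # standard parallel-tempering acceptance ratio, clipped at 1.
        s = min(1.0, np.exp((1.0 / tau1 - 1.0 / tau2) * (f(x) - f(y))))
        if rng.random() < min(1.0, a * eta * s):
            x, y = y, x
    return x  # the low-temperature replica approximates a global minimizer

# Usage on a tilted 1-D double well whose global minimum sits near z = -1;
# the low-temperature chain is started in the wrong (shallow) basin.
f = lambda z: float((z[0] ** 2 - 1.0) ** 2 + 0.3 * z[0])
grad_f = lambda z: np.array([4.0 * z[0] * (z[0] ** 2 - 1.0) + 0.3])
print(replica_exchange_langevin(f, grad_f, x0=[1.0], y0=[-1.0]))
```

On this objective a single Langevin chain at the low temperature tends to stay stuck in the shallow basin, while the swaps let the low-temperature replica inherit the high-temperature chain's discovery of the deeper basin, which is the acceleration effect the paper analyzes.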