Global Non-convex Optimization with Discretized Diffusions

Authors: Murat A. Erdogdu, Lester Mackey, Ohad Shamir

NeurIPS 2018 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Figure 1: The left plot shows the landscape of the non-convex, sublinear-growth function f(x) = c·log(1 + ||x||²/2). The middle and right plots compare the optimization error of gradient descent, the Langevin algorithm, and the discretized diffusion designed in Section 5.1 (a hedged code sketch of the compared updates follows the table).
Researcher Affiliation | Collaboration | Murat A. Erdogdu (erdogdu@cs.toronto.edu), University of Toronto and Vector Institute; Lester Mackey (lmackey@microsoft.com), Microsoft Research; Ohad Shamir (ohad.shamir@weizmann.ac.il), Weizmann Institute of Science
Pseudocode | No | The paper describes mathematical equations and procedures but does not include any structured pseudocode or algorithm blocks.
Open Source Code | No | The paper does not provide any statement about releasing source code or a link to a code repository for the described methodology.
Open Datasets | No | The paper describes theoretical functions and general learning problems (e.g., regularized loss minimization) but does not specify the use of any publicly available datasets with access information or formal citations.
Dataset Splits | No | The paper does not provide specific details on training, validation, or test dataset splits (e.g., percentages, sample counts, or references to predefined splits).
Hardware Specification | No | The paper does not specify any hardware details (e.g., CPU, GPU models, or cloud instance types) used for running experiments.
Software Dependencies | No | The paper does not list specific software dependencies with version numbers.
Experiment Setup | Yes | Here, d = 2, c = 10, the inverse temperature γ = 1, the step size is 0.1, and each algorithm is run from the initial point (90, 110); a usage sketch plugging in these values appears after the table.
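
As the table notes, the paper itself contains no pseudocode, so the following is a minimal sketch, not the authors' code, of the kind of Euler–Maruyama update underlying the Figure 1 comparison. It assumes the objective takes the form f(x) = c·log(1 + ||x||²/2) (reconstructed from the garbled caption) and only implements the gradient descent and unadjusted Langevin baselines; the drift and diffusion coefficients of the diffusion designed in Section 5.1 are not reproduced here, and all function names are illustrative.

```python
# Sketch only (not the authors' code): Euler-Maruyama discretization of a
# diffusion dX_t = b(X_t) dt + sigma(X_t) dB_t. Taking b = -grad f and
# sigma = sqrt(2/gamma) * I recovers the unadjusted Langevin algorithm.
# The objective f(x) = c*log(1 + ||x||^2 / 2) is an assumed reconstruction
# of the figure caption; the Section 5.1 diffusion is NOT implemented here.
import numpy as np

def grad_f(x, c=10.0):
    """Gradient of the assumed objective f(x) = c*log(1 + 0.5*||x||^2)."""
    return c * x / (1.0 + 0.5 * np.dot(x, x))

def euler_maruyama_step(x, drift, diffusion, step_size, rng):
    """One step: x + eta*b(x) + sqrt(eta) * sigma(x) @ standard normal noise."""
    noise = rng.standard_normal(x.shape)
    return x + step_size * drift(x) + np.sqrt(step_size) * diffusion(x) @ noise

def run_langevin(x0, step_size, n_steps, gamma=1.0, seed=0):
    """Unadjusted Langevin algorithm: drift -grad f, diffusion sqrt(2/gamma)*I."""
    rng = np.random.default_rng(seed)
    d = len(x0)
    sigma = lambda x: np.sqrt(2.0 / gamma) * np.eye(d)
    x = np.asarray(x0, dtype=float)
    trace = [x.copy()]
    for _ in range(n_steps):
        x = euler_maruyama_step(x, lambda y: -grad_f(y), sigma, step_size, rng)
        trace.append(x.copy())
    return np.array(trace)

def run_gradient_descent(x0, step_size, n_steps):
    """Plain gradient descent baseline on the same assumed objective."""
    x = np.asarray(x0, dtype=float)
    trace = [x.copy()]
    for _ in range(n_steps):
        x = x - step_size * grad_f(x)
        trace.append(x.copy())
    return np.array(trace)
```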
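
A usage sketch with the reported experiment setup (d = 2, c = 10, γ = 1, step size 0.1, initial point (90, 110)). The number of iterations and the printed quantity are assumptions for illustration; Figure 1 plots optimization error, which is not reconstructed here.

```python
# Assumed run length and reporting; only the setup constants come from the paper.
x0 = [90.0, 110.0]
gd_trace = run_gradient_descent(x0, step_size=0.1, n_steps=10_000)
la_trace = run_langevin(x0, step_size=0.1, n_steps=10_000, gamma=1.0, seed=0)

f = lambda x: 10.0 * np.log1p(0.5 * np.dot(x, x))  # assumed objective with c = 10
print("final f, gradient descent:", f(gd_trace[-1]))
print("final f, Langevin algorithm:", f(la_trace[-1]))
```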