Continuation Path Learning for Homotopy Optimization

Authors: Xi Lin, Zhiyuan Yang, Xiaoyuan Zhang, Qingfu Zhang

ICML 2023

Reproducibility Variable Result LLM Response
Research Type Experimental Experimental studies on different problems show that our proposed method can significantly improve the performance of homotopy optimization and provide extra helpful information to support better decision-making.
Researcher Affiliation Academia Department of Computer Science, City University of Hong Kong. Correspondence to: Xi Lin <xi.lin@my.cityu.edu.hk>.
Pseudocode Yes Algorithm 1 Classical Homotopy Optimization Algorithm
Open Source Code Yes The source code can be found in https://github.com/Xi-L/CPL.
Open Datasets Yes We first test CPL's performance on three widely-used synthetic test benchmark problems, namely the Ackley function (Ackley, 1987), the Rosenbrock function (Rosenbrock, 1960), and the Himmelblau function (Himmelblau et al., 1972).
Dataset Splits No The paper mentions generating training data and using separate test data for evaluation, but does not explicitly specify the use of a validation dataset split (e.g., 80/10/10 split or specific counts for training, validation, and test sets).
Hardware Specification Yes The CPL training on GPU (RTX-3080) is actually slower than its counterpart on CPU.
Software Dependencies No The paper mentions using "PyTorch" and building on the "POMO codebase", but does not provide specific version numbers for these or other software dependencies.
Experiment Setup Yes The optimizer we use is Adam with learning rate η = 10⁻⁴, weight decay ω = 10⁻⁶ and batch size B = 64. At each training epoch, we randomly generate 100,000 problem instances on the fly as training data, and train the model for 1,000 epochs.
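The "Algorithm 1 Classical Homotopy Optimization" cited in the pseudocode row follows the standard continuation scheme: blend an easy surrogate objective into the hard target objective while warm-starting each solve from the previous solution. A minimal sketch of that generic scheme (the linear blend, gradient-descent inner solver, and all function names here are illustrative assumptions, not the paper's exact formulation):

```python
import numpy as np

def numerical_grad(g, x, eps=1e-6):
    """Central-difference gradient; keeps the sketch dependency-free."""
    grad = np.zeros_like(x)
    for i in range(x.size):
        e = np.zeros_like(x)
        e[i] = eps
        grad[i] = (g(x + e) - g(x - e)) / (2 * eps)
    return grad

def homotopy_optimize(f_easy, f_target, x0, ts=None, steps=200, lr=0.01):
    """Classical homotopy optimization (generic sketch).

    Minimizes the blended objective
        g(x, t) = (1 - t) * f_easy(x) + t * f_target(x)
    for t increasing from 0 (easy problem) to 1 (target problem),
    warm-starting each subproblem from the previous solution.
    """
    if ts is None:
        ts = np.linspace(0.0, 1.0, 11)
    x = np.asarray(x0, dtype=float)
    for t in ts:
        g = lambda x, t=t: (1 - t) * f_easy(x) + t * f_target(x)
        for _ in range(steps):  # inner solve by plain gradient descent
            x = x - lr * numerical_grad(g, x)
    return x
```

CPL's contribution, per the paper, is to learn the whole continuation path instead of tracking it pointwise as above.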
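The three synthetic benchmarks named in the open-datasets row have standard closed forms, so reproducing the test problems needs no external data. The definitions below use the common parameterizations (a = 20, b = 0.2, c = 2π for Ackley; a = 1, b = 100 for Rosenbrock); the paper may use different constants or dimensions:

```python
import numpy as np

def ackley(x, a=20.0, b=0.2, c=2 * np.pi):
    """Ackley function; global minimum 0 at the origin."""
    x = np.asarray(x, dtype=float)
    d = x.size
    return (-a * np.exp(-b * np.sqrt(np.sum(x**2) / d))
            - np.exp(np.sum(np.cos(c * x)) / d) + a + np.e)

def rosenbrock(x, a=1.0, b=100.0):
    """Rosenbrock function; global minimum 0 at (1, ..., 1) when a = 1."""
    x = np.asarray(x, dtype=float)
    return np.sum(b * (x[1:] - x[:-1]**2)**2 + (a - x[:-1])**2)

def himmelblau(x):
    """Himmelblau function (2-D); four global minima with value 0, e.g. (3, 2)."""
    x1, x2 = x
    return (x1**2 + x2 - 11)**2 + (x1 + x2**2 - 7)**2
```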
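The hyperparameters in the experiment-setup row translate directly into a PyTorch training skeleton. Only the quoted values (Adam, lr 10⁻⁴, weight decay 10⁻⁶, batch size 64, 100,000 on-the-fly instances per epoch, 1,000 epochs) come from the paper; the model, the instance sampler, and the loss below are placeholders, not the CPL architecture or objective:

```python
import torch

# Values quoted from the paper's setup.
LR, WEIGHT_DECAY, BATCH = 1e-4, 1e-6, 64
INSTANCES_PER_EPOCH, EPOCHS = 100_000, 1_000

def train(model, epochs=EPOCHS, instances_per_epoch=INSTANCES_PER_EPOCH):
    """Training-loop skeleton; the sampler and loss are placeholders."""
    opt = torch.optim.Adam(model.parameters(), lr=LR, weight_decay=WEIGHT_DECAY)
    for _ in range(epochs):
        for _ in range(instances_per_epoch // BATCH):
            batch = torch.rand(BATCH, 3)       # placeholder instance sampler
            loss = model(batch).pow(2).mean()  # placeholder objective
            opt.zero_grad()
            loss.backward()
            opt.step()
    return model
```

Sampling instances on the fly each epoch, as the quote describes, means there is no fixed training set to split, which is consistent with the report's "Dataset Splits: No" finding.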