Learning to Optimize in Swarms

Authors: Yue Cao, Tianlong Chen, Zhangyang Wang, Yang Shen

NeurIPS 2019

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Empirical results over non-convex test functions and the protein-docking application demonstrate that this new meta-optimizer outperforms existing competitors.
Researcher Affiliation | Academia | Yue Cao, Tianlong Chen, Zhangyang Wang, Yang Shen; Departments of Electrical and Computer Engineering & Computer Science and Engineering, Texas A&M University, College Station, TX 77840; {cyppsp,wiwjp619,atlaswang,yshen}@tamu.edu
Pseudocode | No | The paper describes the algorithms and models using text and mathematical equations but does not provide structured pseudocode or algorithm blocks.
Open Source Code | Yes | The codes are publicly available at: https://github.com/Shen-Lab/LOIS.
Open Datasets | Yes | We choose a training set of 25 protein-protein complexes from the protein docking benchmark set 4.0 [29] (see Supp. Table S1 for the list), each of which has 5 starting points (top-5 models from ZDOCK [30]).
Dataset Splits | No | The paper does not explicitly mention a validation set or describe a specific data split for validation.
Hardware Specification | No | Part of the computing time is provided by the Texas A&M High Performance Research Computing. No specific hardware details (e.g., CPU/GPU models, memory) are mentioned.
Software Dependencies | No | To train our model we use the optimizer Adam, which requires gradients. The first-order gradients are calculated numerically through TensorFlow following [17]. The paper mentions TensorFlow and CHARMM but does not provide specific version numbers for any software dependencies.
Experiment Setup | Yes | In our implementation the length of LSTM is set to be 20. For all experiments, the optimizer is trained for 10,000 epochs with 100 iterations in each epoch. The population size k of our meta-optimizer and PSO is set to be 4, 10 and 10 in the 2D, 10D and 20D cases, respectively. (A hedged configuration sketch based on these settings follows the table.)
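
To make the quoted experiment setup concrete, below is a minimal, hypothetical sketch of how those hyperparameters might be wired together in TensorFlow. The loop structure, the `meta_optimizer.unroll` method, the uniform particle initialization, and the choice of meta-loss are illustrative assumptions only; they are not taken from the authors' repository (https://github.com/Shen-Lab/LOIS) and may differ from the actual implementation.

```python
import tensorflow as tf

# Hyperparameters quoted in the paper's experiment-setup excerpt.
LSTM_LENGTH = 20                           # unroll length of the LSTM meta-optimizer
NUM_EPOCHS = 10_000                        # training epochs for the meta-optimizer
ITERS_PER_EPOCH = 100                      # optimization iterations per epoch
POPULATION_SIZE = {2: 4, 10: 10, 20: 10}   # swarm size k by problem dimension


def train_meta_optimizer(meta_optimizer, objective_fn, dim):
    """Illustrative outer loop: train the meta-optimizer with Adam,
    unrolling the LSTM for LSTM_LENGTH steps at a time.

    `meta_optimizer.unroll` is a hypothetical method standing in for the
    authors' LSTM update rule; it is not part of the released code.
    """
    k = POPULATION_SIZE[dim]
    adam = tf.keras.optimizers.Adam()
    for _ in range(NUM_EPOCHS):
        # Assumed initialization: a swarm of k particles drawn uniformly.
        particles = tf.random.uniform((k, dim), -1.0, 1.0)
        for _ in range(ITERS_PER_EPOCH // LSTM_LENGTH):
            with tf.GradientTape() as tape:
                # Unroll the LSTM and accumulate a meta-loss, e.g. the sum of
                # objective values along the optimization trajectory.
                particles, meta_loss = meta_optimizer.unroll(
                    particles, objective_fn, steps=LSTM_LENGTH)
            grads = tape.gradient(meta_loss, meta_optimizer.trainable_variables)
            adam.apply_gradients(zip(grads, meta_optimizer.trainable_variables))
```

Under this reading, the 100 iterations per epoch are split into five unrolled segments of 20 LSTM steps each; the actual segmentation and meta-loss used by the authors may differ.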