Optimizing Discrete Spaces via Expensive Evaluations: A Learning to Search Framework

Authors: Aryan Deshwal, Syrine Belakaria, Janardhan Rao Doppa, Alan Fern (pp. 3773-3780)

AAAI 2020

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "We provide a concrete instantiation of L2S-DISCO for local search procedure and empirically evaluate it on diverse real-world benchmarks. Results show the efficacy of L2S-DISCO over state-of-the-art algorithms in solving complex optimization problems."
Researcher Affiliation | Academia | "1School of EECS, Washington State University; 2School of EECS, Oregon State University. {aryan.deshwal, syrine.belakaria, jana.doppa}@wsu.edu, alan.fern@oregonstate.edu"
Pseudocode | Yes | "Algorithm 1: Bayesian Optimization framework"
Open Source Code | No | "We employed open-source python implementations of both BOCS 2 and SMAC 3. We leveraged existing code1 for our purpose." (Footnotes 1, 2, and 3 refer to external libraries and baselines, not to the authors' own L2S-DISCO implementation.) There is no explicit statement that the authors are releasing their L2S-DISCO code.
Open Datasets | Yes | The paper employs five diverse benchmark domains for empirical evaluation: 1. Contamination (Hu et al. 2010; Baptista and Poloczek 2018); 2. Sparsification of zero-field Ising models (Baptista and Poloczek 2018); 3. Low auto-correlation binary sequences (LABS) (Packebusch and Mertens 2015); 4. Network optimization in multicore chips, using the rodinia benchmark (Che et al. 2009) and the gem5-GPU simulator (Power et al. 2014); 5. Core placement optimization in multicore chips, using the rodinia benchmark (Che et al. 2009).
Dataset Splits | No | The paper does not provide dataset split information (exact percentages, sample counts, or splitting methodology) for training, validation, or testing. It only mentions initializing the surrogate with 20 random structures.
Hardware Specification | Yes | "BOCS took one hour per single BO iteration on a machine with Intel Xeon(R) 2.5GHz CPU and 96 GB memory."
Software Dependencies | No | The paper mentions using Python implementations, a random forest model from the scikit-learn library, and RankNet, but does not provide version numbers for these dependencies (e.g., Python or scikit-learn version).
Experiment Setup | Yes | "We initialize the surrogate of all the methods by evaluating 20 random structures. For L2S-DISCO, we employed random forest model with 20 trees... and two different acquisition functions (EI and UCB). For UCB, we use the adaptive rate recommended by (Srinivas et al. 2010) to set the exploration and exploitation trade-off parameter βi value depending on the iteration number i. We ran L2S-DISCO (Algorithm 2) for a maximum of 60 iterations."
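The experiment setup above (20 random initial structures, a 20-tree random forest surrogate, a UCB acquisition with the adaptive βi schedule of Srinivas et al. 2010, and at most 60 iterations) can be sketched as a plain Bayesian optimization loop. This is an illustrative reconstruction, not the authors' L2S-DISCO code: the toy objective `f`, the candidate-sampling step, and the δ = 0.1 confidence parameter are assumptions made for the sake of a runnable example.

```python
# Hedged sketch of the paper's BO setup: RF surrogate + UCB over binary vectors.
# The objective and candidate pool are toy stand-ins, NOT the paper's benchmarks.
import math
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
n_dims = 10  # assumed small binary space for illustration

def f(x):
    # Toy maximization objective: agreement with an alternating target pattern.
    target = np.arange(n_dims) % 2
    return -float(np.sum((x - target) ** 2))

# Initialize the surrogate with 20 random structures, as in the paper.
X = rng.integers(0, 2, size=(20, n_dims)).astype(float)
y = np.array([f(x) for x in X])

for i in range(1, 61):  # maximum of 60 BO iterations, as in the paper
    model = RandomForestRegressor(n_estimators=20, random_state=0).fit(X, y)
    cand = rng.integers(0, 2, size=(128, n_dims)).astype(float)
    # Per-tree predictions give an empirical mean/std for the UCB score.
    per_tree = np.stack([t.predict(cand) for t in model.estimators_])
    mu, sigma = per_tree.mean(axis=0), per_tree.std(axis=0)
    # Adaptive beta_i (Srinivas et al. 2010) with |D| = 2^n_dims, delta = 0.1 (assumed).
    beta_i = 2 * math.log((2 ** n_dims) * i ** 2 * math.pi ** 2 / (6 * 0.1))
    best = cand[np.argmax(mu + math.sqrt(beta_i) * sigma)]
    X = np.vstack([X, best])
    y = np.append(y, f(best))

print("best value found:", y.max())
```

Note that the actual L2S-DISCO algorithm interleaves this surrogate/acquisition machinery with a learned local search procedure; the sketch shows only the generic BO scaffolding the paper builds on.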