Multi-objective Optimization by Learning Space Partition

Authors: Yiyang Zhao, Linnan Wang, Kevin Yang, Tianjun Zhang, Tian Guo, Yuandong Tian

ICLR 2022 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Empirically, on the hypervolume (HV) benchmark, a popular MOO metric, LaMOO substantially outperforms strong baselines on multiple real-world MOO tasks, by up to 225% in sample efficiency for neural architecture search on Nasbench201, and up to 10% for molecular design. (A worked HV example follows this table.)
Researcher Affiliation | Collaboration | Yiyang Zhao (Worcester Polytechnic Institute); Linnan Wang (Brown University); Kevin Yang (UC Berkeley); Tianjun Zhang (UC Berkeley); Tian Guo (Worcester Polytechnic Institute); Yuandong Tian (Facebook AI Research)
Pseudocode | Yes | Algorithm 1: LaMOO pseudocode. (A partition-step sketch follows this table.)
Open Source Code | No | No explicit statement about providing concrete access to source code for the methodology described in this paper was found.
Open Datasets | Yes | Nasbench201 is a public benchmark to evaluate NAS algorithms (Dong & Yang, 2020).
Dataset Splits | No | No specific dataset split information (exact percentages, sample counts, citations to predefined splits, or detailed splitting methodology) needed to reproduce the data partitioning was found.
Hardware Specification | No | No specific hardware details (exact GPU/CPU models, processor types with speeds, memory amounts, or detailed computer specifications) used for running the experiments were found.
Software Dependencies | No | No specific ancillary software details (e.g., library or solver names with version numbers, such as Python 3.8 or CPLEX 12.4) needed to replicate the experiment were found.
Experiment Setup | Yes | For all problems, we leverage polynomials as the kernel type of the SVM, and the degree of the polynomial kernel function is set to 4. The minimum number of samples in an MCTS leaf is 10. The exploration constant c_p is set to roughly 10% of the maximum hypervolume... Hyperparameters of qEHVI and qParEGO: the batch size q is set to 5. The acquisition function is optimized with L-BFGS-B (with a maximum of 200 iterations). In each iteration, 256 raw samples used by the initialization heuristic are generated for selection by the acquisition function. (A hedged BoTorch configuration sketch follows this table.)