Fast Rank-1 Lattice Targeted Sampling for Black-box Optimization

Author: Yueming LYU

NeurIPS 2023

Reproducibility assessment. Each entry below gives the variable, the assessed result, and the supporting LLM response quoted or paraphrased from the paper.

Research Type: Experimental
LLM Response: "Extensive experiments on challenging benchmark test functions and black-box prompt fine-tuning for large language models demonstrate the query efficiency of our RLTS technique. We first evaluate our RLTS on challenging benchmark test functions: Rosenbrock, Rastrigin, and Nesterov."

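For concreteness, here is a minimal sketch of the three benchmark objectives under their standard definitions; the exact shifts, scaling, and the precise Nesterov variant used in the paper are assumptions here, not confirmed by the source.

```python
import numpy as np

def rosenbrock(x):
    # Standard Rosenbrock: sum_i 100*(x_{i+1} - x_i^2)^2 + (1 - x_i)^2.
    return np.sum(100.0 * (x[1:] - x[:-1] ** 2) ** 2 + (1.0 - x[:-1]) ** 2)

def rastrigin(x):
    # Standard Rastrigin: 10*d + sum_i (x_i^2 - 10*cos(2*pi*x_i)).
    return 10.0 * x.size + np.sum(x ** 2 - 10.0 * np.cos(2.0 * np.pi * x))

def nesterov(x):
    # One common non-smooth Nesterov test function; the paper's exact form may differ.
    return 0.25 * abs(x[0] - 1.0) + np.sum(np.abs(x[1:] - 2.0 * np.abs(x[:-1]) + 1.0))
```
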
Researcher Affiliation: Academia
LLM Response: "Yueming LYU, Centre for Frontier AI Research (CFAR), Institute of High Performance Computing (IHPC), Agency for Science, Technology and Research (A*STAR), 1 Fusionopolis Way, #16-16 Connexis, Singapore 138632, Lyu_Yueming@cfar.a-star.edu.sg"

Pseudocode: Yes
LLM Response: The paper provides Algorithm 1 (Fast Coordinate Search) and Algorithm 2 (Rank-1 Lattice Targeted Sampling).

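As background, a rank-1 lattice with generating vector z in Z^d and n points places its i-th point at frac(i·z/n). Below is a minimal sketch of this standard construction; the paper's contribution, searching z coordinate by coordinate and targeting the samples around the current search distribution, is not reproduced here.

```python
import numpy as np

def rank1_lattice(z, n):
    # Rank-1 lattice points: x_i = frac(i * z / n), i = 0..n-1, each in [0, 1)^d.
    z = np.asarray(z, dtype=np.int64)
    i = np.arange(n, dtype=np.int64)[:, None]     # shape (n, 1)
    return (i * z[None, :] % n) / float(n)        # shape (n, d)

# Example with a hypothetical generating vector in 3 dimensions.
points = rank1_lattice(z=[1, 3, 5], n=8)
```
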
Open Source Code: No
LLM Response: The paper mentions using "publicly available code" for a backbone model (Sun et al., 2022a), but there is no explicit statement or link indicating that the authors' own RLTS implementation is open source.

Open Datasets: Yes
LLM Response: "Six benchmark datasets for different language tasks are employed for evaluation: DBpedia, SST2, SNLI, AG's News, MRPC, and RTE. The SST2 dataset [Socher et al., 2013] is for the sentiment analysis task. The AG's News and DBpedia datasets [Zhang et al., 2015] are used for topic classification tasks. SNLI [Bowman et al., 2015] and RTE [Wang et al., 2019] are employed for natural language inference. The MRPC dataset [Dolan and Brockett, 2005] is used for the paraphrasing task."

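The paper does not describe a data-loading pipeline. As one plausible way to obtain these public datasets, here is a sketch using the Hugging Face datasets library; the dataset IDs below are assumptions, not taken from the paper.

```python
from datasets import load_dataset  # pip install datasets

# Hugging Face dataset IDs are assumptions; the paper does not specify how data was loaded.
sst2 = load_dataset("glue", "sst2")      # sentiment analysis
ag_news = load_dataset("ag_news")        # topic classification
dbpedia = load_dataset("dbpedia_14")     # topic classification
snli = load_dataset("snli")              # natural language inference
rte = load_dataset("glue", "rte")        # natural language inference
mrpc = load_dataset("glue", "mrpc")      # paraphrase detection
```
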
Dataset Splits: No
LLM Response: The paper specifies batch sizes and the number of independent runs for the experiments, but it does not explicitly provide training/validation/test split percentages or sample counts.

Hardware Specification: Yes
LLM Response: "All the experiments are performed in 50 runs on a single NVIDIA A40 card."

Software Dependencies: No
LLM Response: The paper mentions PyTorch as the deep learning toolbox and the cma package, but it does not provide version numbers for these software dependencies.

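For reference, here is a minimal use of the cma package's CMA-ES interface on a toy objective; the paper's actual baseline configuration is not given, so every setting below is illustrative.

```python
import cma  # pip install cma

def sphere(x):
    # Toy objective for illustration only; not one of the paper's benchmarks.
    return sum(v * v for v in x)

# Illustrative initial point and step size; the paper does not report its baseline settings.
es = cma.CMAEvolutionStrategy(x0=[0.5] * 10, sigma0=0.5)
while not es.stop():
    candidates = es.ask()                 # sample a population of candidates
    es.tell(candidates, [sphere(x) for x in candidates])
print(es.result.xbest)                    # best solution found
```
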
Experiment Setup: Yes
LLM Response: "For all the methods, we initialize µ = 0. For INGO and RLTS, we set the step-size parameter β = 0.2 in all experiments. For RLTS, we set the parameter η = 1 in all experiments. We initialize Σ = I for all the methods. The number of epochs of training is set to 2000. The number of iterations of fast coordinate search is set to T = 50."

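Collected into one place, the reported settings might look like the configuration sketch below; the variable names and the dimensionality are illustrative, and only the values are taken from the paper.

```python
import numpy as np

DIM = 10  # problem dimensionality; illustrative, not stated in this excerpt

# Values quoted from the paper; names and structure are illustrative.
config = {
    "mu_init": np.zeros(DIM),     # µ = 0 for all methods
    "Sigma_init": np.eye(DIM),    # Σ = I for all methods
    "beta": 0.2,                  # step-size parameter for INGO and RLTS
    "eta": 1.0,                   # RLTS parameter η
    "epochs": 2000,               # number of training epochs
    "T": 50,                      # iterations of fast coordinate search
}
```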