Derivative-Free Optimization via Classification

Authors: Yang Yu, Hong Qian, Yi-Qi Hu

AAAI 2016

Reproducibility assessment

Research Type: Experimental
  Evidence: "Experiments on the testing functions as well as on the machine learning tasks including spectral clustering and classification with Ramp loss demonstrate the effectiveness of RACOS." "We then conduct experiments comparing RACOS with some state-of-the-art derivative-free optimization methods on optimization testing functions and machine learning tasks including spectral clustering and classification with Ramp loss. The experiment results show that RACOS is superior to the compared methods."

Researcher Affiliation: Academia
  Evidence: "Yang Yu, Hong Qian, and Yi-Qi Hu. National Key Laboratory for Novel Software Technology, Nanjing University, Nanjing 210023, China. {yuy,qianh,huyq}@lamda.nju.edu.cn"

Pseudocode: Yes
  Evidence: "Algorithm 1: classification-based optimization"; "Algorithm 2: the randomized coordinate shrinking classification algorithm for X = {0, 1}^n or [0, 1]^n"

Open Source Code: Yes
  Evidence: "The codes of RACOS can be found from http://cs.nju.edu.cn/yuy."

Open Datasets: Yes
  Evidence: "Five binary UCI datasets (Blake, Keogh, and Merz 1998) are employed: Sonar, Heart, Ionosphere, Breast Cancer and German, with 208, 270, 351, 683 and 1000 instances, respectively." Reference: Blake, C. L.; Keogh, E.; and Merz, C. J. 1998. UCI Repository of Machine Learning Databases. http://www.ics.uci.edu/~mlearn/MLRepository.html

Dataset Splits: No
  The paper mentions using UCI datasets and specific parameters for the RACOS algorithm (e.g., m = 100), but does not state how the datasets were split into training, validation, or test sets, nor does it specify a cross-validation strategy.

Hardware Specification: No
  The paper does not describe the hardware used to run the experiments, such as CPU or GPU models or cloud computing resources.

Software Dependencies: No
  The paper refers to various algorithms and their implementations, but does not name any software with version numbers, such as the programming languages or libraries used in the experiments.

Experiment Setup: Yes
  Evidence: "We use the same fixed parameters for RACOS in all the following experiments: in Algorithm 1 we set λ = 0.95, m = 100, and α_t is set so that only the best solution is positive, and in Algorithm 2 we set M = 1." The maximum number of function evaluations is set to 30n for all algorithms, and "All features are normalized into [-1, 1]" (or [0, 1]). Additional quoted settings: "using the bit-wise mutation with probability 1/n and one-bit crossover with probability 0.5".
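To make the reported setup concrete, the quoted Algorithm 1 parameters (λ = 0.95, m = 100, a positive set containing only the best solution, and a 30n evaluation budget) can be sketched as a minimal classification-based optimization loop. This is an illustrative approximation, not the paper's exact pseudocode: the learned "positive region" is simplified here to an axis-aligned box randomly shrunk around the single best solution, standing in for Algorithm 2 on the [0, 1]^n domain, and the function name `racos_minimize` is our own.

```python
import random


def racos_minimize(f, n, budget, m=100, lam=0.95, seed=0):
    """Hedged sketch of classification-based optimization (RACOS-style).

    f      : objective to minimize over [0, 1]^n
    n      : dimensionality
    budget : total number of function evaluations (the paper uses 30n)
    m      : population size (paper: m = 100)
    lam    : probability of sampling from the learned positive region
             (paper: lambda = 0.95)
    """
    rng = random.Random(seed)

    # Initialize m uniformly random solutions and evaluate them.
    pop = [[rng.random() for _ in range(n)] for _ in range(m)]
    best_val, best_x = min(((f(x), x) for x in pop), key=lambda t: t[0])
    evals = m

    while evals < budget:
        if rng.random() < lam:
            # Exploit: sample from a crude "positive region" -- start from a
            # uniform point, then randomly shrink about half the coordinates
            # onto the best-so-far solution (a stand-in for the paper's
            # randomized coordinate shrinking classifier).
            x = [rng.random() for _ in range(n)]
            for i in rng.sample(range(n), k=max(1, n // 2)):
                x[i] = best_x[i]
        else:
            # Explore: sample uniformly from the whole domain.
            x = [rng.random() for _ in range(n)]

        v = f(x)
        evals += 1
        # alpha_t "so that only the best solution is positive": keep a
        # single incumbent best.
        if v < best_val:
            best_val, best_x = v, x

    return best_val, best_x
```

For example, minimizing the sphere function in 10 dimensions with the paper's 30n budget would be called as `racos_minimize(lambda x: sum(t * t for t in x), n=10, budget=300)`.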