Subset Selection by Pareto Optimization

Authors: Chao Qian, Yang Yu, Zhi-Hua Zhou

NeurIPS 2015

| Reproducibility Variable | Result | LLM Response |
| --- | --- | --- |
| Research Type | Experimental | "Empirical study verifies the theoretical results, and exhibits the superior performance of POSS over greedy and convex relaxation methods. We conducted experiments on 12 data sets in Table 1 to compare POSS with the following methods:" |
| Researcher Affiliation | Academia | Chao Qian, Yang Yu, Zhi-Hua Zhou. National Key Laboratory for Novel Software Technology, Nanjing University; Collaborative Innovation Center of Novel Software Technology and Industrialization, Nanjing 210023, China |
| Pseudocode | Yes | Algorithm 1: Forward Regression. Algorithm 2: POSS. |
| Open Source Code | No | The paper mentions the "Sparse Reg toolbox developed in [28, 27]", which is a third-party tool, but it does not provide a link to, or a statement of availability for, the authors' own implementation of POSS or their experiments. |
| Open Datasets | Yes | "The data sets are from http://archive.ics.uci.edu/ml/ and http://www.csie.ntu.edu.tw/~cjlin/libsvmtools/datasets/." |
| Dataset Splits | Yes | "The data set is randomly and evenly split into a training set and a test set." |
| Hardware Specification | No | The paper does not specify the hardware used to run the experiments (e.g., CPU/GPU models, memory). |
| Software Dependencies | No | The paper mentions using the "Sparse Reg toolbox developed in [28, 27]" but gives no version numbers for it or for any other software dependency. |
| Experiment Setup | Yes | "For POSS, we use I(·) = 0 since it is generally good, and the number of iterations T is set to be 2ek²n as suggested by Theorem 1. We add the ℓ2-norm regularization into the objective function... We then test all the compared methods to solve this optimization problem with λ = 0.9615 on sonar." |
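To make the setup row concrete, the sketch below illustrates the POSS procedure (Algorithm 2 of the paper) for sparse regression: a bi-objective search that minimizes the squared error and the subset size, keeps a Pareto archive of non-dominated subsets, and runs for T = 2ek²n iterations. This is a minimal illustrative sketch, not the authors' code; the plain-MSE objective (the paper's experiment adds an ℓ2 regularization term), the `mse` helper, and the infeasibility threshold |s| ≥ 2k follow the paper's description, but all function names and defaults here are assumptions.

```python
import numpy as np

def mse(X, y, s):
    # Mean squared error of a least-squares fit on the selected columns;
    # the empty subset is scored by the variance of y (no predictors).
    if s.sum() == 0:
        return float(np.mean((y - y.mean()) ** 2))
    Xs = X[:, s.astype(bool)]
    w, *_ = np.linalg.lstsq(Xs, y, rcond=None)
    return float(np.mean((y - Xs @ w) ** 2))

def poss(X, y, k, T=None, rng=None):
    """Sketch of POSS (Qian et al., NeurIPS 2015) for subset selection.

    Bi-objective: minimize the error f(s) and the subset size |s|;
    subsets with |s| >= 2k are treated as infeasible (f = +inf).
    """
    rng = np.random.default_rng(rng)
    n = X.shape[1]
    if T is None:
        T = int(2 * np.e * k * k * n)  # budget suggested by Theorem 1

    def evaluate(s):
        size = int(s.sum())
        f = np.inf if size >= 2 * k else mse(X, y, s)
        return f, size

    empty = np.zeros(n, dtype=int)
    archive = [(empty, *evaluate(empty))]  # Pareto archive of (s, f, |s|)
    for _ in range(T):
        s, _, _ = archive[rng.integers(len(archive))]
        # Bit-wise mutation: flip each bit independently with prob. 1/n.
        child = np.where(rng.random(n) < 1.0 / n, 1 - s, s)
        cf, csize = evaluate(child)
        # Discard the child if some archived subset weakly dominates it.
        if any(f <= cf and size <= csize for _, f, size in archive):
            continue
        # Otherwise add it and drop the subsets it weakly dominates.
        archive = [(t, f, size) for t, f, size in archive
                   if not (cf <= f and csize <= size)]
        archive.append((child, cf, csize))

    # Return the best archived subset with at most k selected variables.
    feasible = [(t, f) for t, f, size in archive if size <= k]
    return min(feasible, key=lambda p: p[1])[0]
```

The archive always retains the empty subset (nothing strictly dominates size 0), so a feasible solution is guaranteed at the end; in practice a larger T than the theoretical minimum only improves the returned subset.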