Notice: The reproducibility variables underlying each score are classified using an automated LLM-based pipeline, validated against a manually labeled dataset. LLM-based classification introduces uncertainty and potential bias; scores should be interpreted as estimates. Full accuracy metrics and methodology are described in [1].
Subset Selection by Pareto Optimization
Authors: Chao Qian, Yang Yu, Zhi-Hua Zhou
NeurIPS 2015
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Empirical study verifies the theoretical results, and exhibits the superior performance of POSS to greedy and convex relaxation methods. We conducted experiments on 12 data sets in Table 1 to compare POSS with the following methods: |
| Researcher Affiliation | Academia | Chao Qian Yang Yu Zhi-Hua Zhou National Key Laboratory for Novel Software Technology, Nanjing University Collaborative Innovation Center of Novel Software Technology and Industrialization Nanjing 210023, China |
| Pseudocode | Yes | Algorithm 1 Forward Regression. Algorithm 2 POSS. |
| Open Source Code | No | The paper mentions the 'Sparse Reg toolbox developed in [28, 27]' which is a third-party tool, but it does not provide explicit access (link or statement of availability) to the authors' implementation code for the POSS method or their experiments. |
| Open Datasets | Yes | The data sets are from http://archive.ics.uci.edu/ml/ and http://www.csie.ntu.edu.tw/~cjlin/libsvmtools/datasets/. |
| Dataset Splits | Yes | The data set is randomly and evenly split into a training set and a test set. |
| Hardware Specification | No | The paper does not provide specific details about the hardware used for running the experiments (e.g., CPU/GPU models, memory). |
| Software Dependencies | No | The paper mentions using 'Sparse Reg toolbox developed in [28, 27]' but does not provide specific version numbers for this or any other software dependencies. |
| Experiment Setup | Yes | For POSS, we use I(·) = 0 since it is generally good, and the number of iterations T is set to be 2ek²n as suggested by Theorem 1. We add the ℓ2 norm regularization into the objective function... We then test all the compared methods to solve this optimization problem with λ = 0.9615 on sonar. |
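To make the setup row concrete, below is a minimal, illustrative sketch of the POSS procedure outlined in the paper's Algorithm 2: maintain a Pareto archive of subsets compared on the bi-objective (f(x), |x|), mutate a random archived solution by flipping each bit with probability 1/n, and run for T = 2ek²n iterations. The toy maximum-coverage objective, the function names, and all parameter choices here are our own illustration, not the authors' implementation.

```python
import math
import random

def poss(f, n, k, T=None, seed=0):
    """Sketch of Pareto Optimization for Subset Selection (POSS).

    f: objective to maximize, called as f(mask) on a tuple of n bits.
    n: ground-set size; k: size budget.
    Solutions are compared on (f(x), |x|): higher f and fewer elements
    weakly dominate.
    """
    rng = random.Random(seed)
    if T is None:
        T = int(2 * math.e * k * k * n)  # iteration budget from Theorem 1

    def size(x):
        return sum(x)

    zero = (0,) * n
    pop = {zero: f(zero)}  # Pareto archive: mask -> objective value
    for _ in range(T):
        x = rng.choice(list(pop))
        # bit-wise mutation: flip each bit independently with prob 1/n
        y = tuple(b ^ (rng.random() < 1.0 / n) for b in x)
        if size(y) >= 2 * k:  # discard solutions far beyond the budget
            continue
        fy = f(y)
        # discard y if some archived z weakly dominates it
        if any(pop[z] >= fy and size(z) <= size(y) for z in pop):
            continue
        # remove solutions weakly dominated by y, then archive y
        pop = {z: fz for z, fz in pop.items()
               if not (fy >= fz and size(y) <= size(z))}
        pop[y] = fy
    # return the best feasible solution (|x| <= k) in the archive
    best = max((x for x in pop if size(x) <= k), key=lambda x: pop[x])
    return best, pop[best]

# Toy usage: maximum coverage with a budget of k = 2 sets.
sets = [{1, 2, 3}, {4, 5}, {1, 4}, {2, 5}, {3}, {6}]

def coverage(mask):
    covered = set()
    for bit, s in zip(mask, sets):
        if bit:
            covered |= s
    return len(covered)

best_mask, best_value = poss(coverage, n=len(sets), k=2, T=2000)
```

Note the two design points the paper's setup row reflects: the archive never grows beyond one solution per subset size below 2k, and the T = 2ek²n budget is what Theorem 1 uses to guarantee the greedy-matching approximation in expectation.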