10,000+ Times Accelerated Robust Subset Selection
Authors: Feiyun Zhu, Bin Fan, Xinliang Zhu, Ying Wang, Shiming Xiang, Chunhong Pan
AAAI 2015
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Extensive experiments on ten benchmark datasets verify that our method not only outperforms state-of-the-art methods, but also runs 10,000+ times faster than the most related method. |
| Researcher Affiliation | Academia | Institute of Automation, Chinese Academy of Sciences {fyzhu, bfan, ywang, smxiang and chpan}@nlpr.ia.ac.cn, zhuxinliang2012@ia.ac.cn |
| Pseudocode | Yes | Algorithm 1 for (13): A = ARSS_A(X, V, P, I_L, β). Input: X, V, P, I_L, β. 1: if N ≤ L then 2: update A via the updating rule (14), that is 3: A = β(V + βXᵀX)⁻¹XᵀP. 4: else if N > L then 5: update A via the updating rule (15), that is 6: A = B(I_L + XB)⁻¹P, where B = β(XV⁻¹)ᵀ. 7: end if. Output: A. (A NumPy sketch of this update follows the table.) |
| Open Source Code | No | The paper does not provide any concrete access to source code for the described methodology. |
| Open Datasets | Yes | Brief descriptions of ten benchmark datasets are summarized in Table 2, where Total(N) denotes the total set of samples in each data. |
| Dataset Splits | No | The paper states: 'The top 200 representative samples are selected for training.' and 'The remainder (except candidate set) are used for test.' but does not explicitly provide details about validation splits, percentages, or the methodology for partitioning data into training, validation, and test sets. |
| Hardware Specification | Yes | All experiments are conducted on a server with 64-core Intel Xeon E7-4820 @ 2.00 GHz, 18 Mb Cache and 0.986 TB RAM, using Matlab 2012. |
| Software Dependencies | Yes | using Matlab 2012. |
| Experiment Setup | No | The paper mentions general experimental settings like selecting the top 200 representative samples for training, but it does not provide specific hyperparameter values (e.g., learning rate, batch size) or detailed system-level training configurations. |
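The updating rules in the pseudocode row admit a direct implementation. Below is a minimal NumPy sketch of the A-update (Algorithm 1) reconstructed from that cell; the function name `arss_A`, the assumed shapes (X as L×N, the diagonal of V stored as a vector `v`, P with L rows), and the choice of Python over the paper's Matlab are all illustrative assumptions, not the authors' reference implementation.

```python
import numpy as np

def arss_A(X, v, P, beta):
    """One A-update of ARSS (Algorithm 1); a sketch, not the authors' code.

    Assumed shapes: X is (L, N) with L features and N candidate samples,
    v is the (N,) positive diagonal of the re-weighting matrix V,
    P is (L, M), and beta is a positive scalar.
    """
    L, N = X.shape
    if N <= L:
        # Updating rule (14): A = beta * (V + beta * X^T X)^{-1} X^T P,
        # which solves an N x N linear system.
        return beta * np.linalg.solve(np.diag(v) + beta * (X.T @ X), X.T @ P)
    else:
        # Updating rule (15), the Woodbury form of (14):
        # A = B (I_L + X B)^{-1} P with B = beta * (X V^{-1})^T,
        # which solves only an L x L system when N > L.
        B = beta * (X / v).T  # equals (X V^{-1})^T, since V is diagonal
        return B @ np.linalg.solve(np.eye(L) + X @ B, P)
```

Both branches compute the same A; the test on N versus L simply picks the rule that inverts the smaller of the two possible systems (N×N or L×L), which keeps the closed-form update cheap whether X is tall or wide.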