Learning by Minimizing the Sum of Ranked Range

Authors: Shu Hu, Yiming Ying, Xin Wang, Siwei Lyu

NeurIPS 2020

Each entry below gives a reproducibility variable, the extracted result, and the supporting LLM response.
Research Type: Experimental
LLM Response: "Our empirical results highlight the effectiveness of the proposed optimization framework and demonstrate the applicability of proposed losses using synthetic and real datasets."
Researcher Affiliation: Collaboration
LLM Response: Shu Hu, University at Buffalo, SUNY (shuhu@buffalo.edu); Yiming Ying, University at Albany, SUNY (yying@albany.edu); Xin Wang, CuraCloud Corporation (xinw@curacloudcorp.com); Siwei Lyu, University at Buffalo, SUNY (siweilyu@buffalo.edu)
Pseudocode: Yes
LLM Response: "Algorithm 1: DCA for Minimizing SoRR"
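For context, the SoRR objective is a difference of convex functions, psi(theta) = phi_k(theta) - phi_m(theta), where phi_j(theta) is the sum of the j largest per-sample losses; this is what makes a DCA (difference-of-convex algorithm) applicable. Below is a minimal conceptual sketch of such a DCA loop, not a transcription of the paper's Algorithm 1 (which works with an equivalent variational form of the top-k sum); loss_fn and grad_fn are hypothetical placeholders for a model's per-sample losses and the gradient of a weighted loss sum.

```python
import numpy as np

def top_j_weights(losses, j):
    """0/1 weights selecting the j largest per-sample losses."""
    w = np.zeros(len(losses))
    w[np.argsort(losses)[-j:]] = 1.0
    return w

def dca_sorr(theta, X, y, loss_fn, grad_fn, k, m,
             outer_iters=50, inner_iters=100, lr=1e-2):
    """Sketch: minimize phi_k(theta) - phi_m(theta) by DCA-style alternation."""
    for _ in range(outer_iters):
        # Linearize the concave part -phi_m: fix a subgradient of phi_m,
        # i.e. the gradient of the sum of the current top-m losses.
        g_m = grad_fn(theta, X, y, top_j_weights(loss_fn(theta, X, y), m))
        # Convex subproblem: subgradient descent on phi_k(theta) - <g_m, theta>.
        for _ in range(inner_iters):
            w_k = top_j_weights(loss_fn(theta, X, y), k)
            theta = theta - lr * (grad_fn(theta, X, y, w_k) - g_m)
    return theta
```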
Open Source Code: Yes
LLM Response: Code available at https://github.com/discovershu/SoRR.
Open Datasets: Yes
LLM Response: "We use five benchmark datasets from the UCI [10] and the KEEL [1] data repositories"; "We use the MNIST dataset [18]"
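The UCI and KEEL sets are available from their respective repositories; as an assumed example of obtaining MNIST (the paper does not describe its loading pipeline), torchvision can fetch it directly:

```python
from torchvision import datasets, transforms

# Downloads MNIST to ./data on first use; ToTensor scales pixels to [0, 1].
mnist_train = datasets.MNIST(root="./data", train=True, download=True,
                             transform=transforms.ToTensor())
mnist_test = datasets.MNIST(root="./data", train=False, download=True,
                            transform=transforms.ToTensor())
```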
Dataset Splits: Yes
LLM Response: "For each dataset, we first randomly select 50% samples for training, and the remaining 50% samples are randomly split for validation and testing (each contains 25% samples)."
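A minimal sketch of that 50/25/25 split, assuming scikit-learn (the paper does not name a library) and stand-in arrays X and y:

```python
import numpy as np
from sklearn.model_selection import train_test_split

X, y = np.random.randn(1000, 10), np.random.randint(0, 2, 1000)  # stand-in data
# 50% for training, then split the remaining half evenly: 25% val, 25% test.
X_train, X_rest, y_train, y_rest = train_test_split(X, y, train_size=0.5,
                                                    random_state=0)
X_val, X_test, y_val, y_test = train_test_split(X_rest, y_rest, test_size=0.5,
                                                random_state=0)
```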
Hardware Specification: No
LLM Response: The paper describes the experimental setup and results but does not provide any specific details about the hardware (e.g., GPU models, CPU types, memory) used to run the experiments.
Software Dependencies: No
LLM Response: The paper describes the algorithms and their application but does not specify any software dependencies with version numbers (e.g., Python 3.8, PyTorch 1.9) that would be needed for replication.
Experiment Setup: Yes
LLM Response: "Hyper-parameters C, k, and m are selected based on the validation set. Specifically, parameter C is chosen from {10^0, 10^1, 10^2, 10^3, 10^4, 10^5}, parameter k is chosen from {1} ∪ [0.1 : 0.1 : 1]·n, where n is the number of training samples, and parameter m is selected in the range of [1, k)."
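Read literally, that search space can be enumerated as below. This is an assumed reconstruction: the integer stepping of m over [1, k) and the rounding of k to integers are guesses, since the excerpt only gives the ranges.

```python
import numpy as np

n = 500  # number of training samples (dataset-dependent)
C_grid = [10 ** p for p in range(6)]  # {10^0, ..., 10^5}
k_grid = sorted({1} | {int(round(r * n)) for r in np.arange(0.1, 1.01, 0.1)})
# m ranges over [1, k); note the grid grows quickly with n.
grid = [(C, k, m) for C in C_grid for k in k_grid for m in range(1, k)]
```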