The Limits of Maxing, Ranking, and Preference Learning
Authors: Moein Falahatgar, Ayush Jain, Alon Orlitsky, Venkatadheeraj Pichapati, Vaishakh Ravindrakumar
ICML 2018
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | 'We present experiments over simulated data in Section 8 and end with our conclusions in Section 9.' |
| Researcher Affiliation | Academia | University of California, San Diego. Correspondence to: Venkatadheeraj Pichapati <dheerajpv7@ucsd.edu>. |
| Pseudocode | Yes | Algorithm 1 SOFT-SEQ-ELIM; Algorithm 2 NEAR-OPT-MAX; Algorithm 3 OPT-MAX-LOW; Algorithm 4 OPT-MAX; Algorithm 5 APPROX-PROB (a sequential-elimination sketch follows the table) |
| Open Source Code | No | The paper does not contain any explicit statement about releasing source code for the methodology described, nor does it provide a link to a code repository. |
| Open Datasets | No | The experiments are conducted on simulated data: 'We present experiments over simulated data in Section 8 and end with our conclusions in Section 9.' There is no mention of a publicly available dataset with concrete access information such as a link, DOI, or formal citation. |
| Dataset Splits | No | The paper describes experiments over simulated data but does not provide specific training/validation/test dataset splits, sample counts, or references to predefined splits. |
| Hardware Specification | No | The paper does not provide any specific hardware details such as GPU/CPU models, processor types, or memory amounts used for running the experiments. It only states that experiments were run on 'simulated data'. |
| Software Dependencies | No | The paper does not provide any specific ancillary software details, such as library or solver names with version numbers, needed to replicate the experiments. |
| Experiment Setup | Yes | 'In all experiments, we use maxing algorithms to find a 0.05-maximum with δ = 0.1. All results presented here are averaged over 1000 runs.' (A harness mirroring this setup is sketched below.) |
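
The Pseudocode row lists five procedures. As a rough illustration of the sequential-elimination idea behind SOFT-SEQ-ELIM, the Python sketch below maintains a running winner and challenges it with each remaining item. The fixed Hoeffding-style comparison budget and the majority-vote duel are assumptions for illustration only; the paper's subroutines and budgets differ.

```python
import math
import random

def duel(prob_fn, i, j, budget):
    """Majority vote over a fixed number of noisy comparisons.
    prob_fn(i, j) is the probability that item i beats item j once.
    (Illustrative stand-in for the paper's comparison subroutines.)"""
    wins = sum(random.random() < prob_fn(i, j) for _ in range(budget))
    return i if 2 * wins >= budget else j

def seq_elim_max(items, prob_fn, eps=0.05, delta=0.1):
    """Sequential elimination: keep a running winner and challenge it with
    each remaining item. A generic sketch, not the paper's SOFT-SEQ-ELIM."""
    n = len(items)
    # Hoeffding-style budget so each duel errs with probability <= delta/n
    # (union bound over n - 1 duels); the paper uses sharper adaptive budgets.
    budget = math.ceil((2 / eps ** 2) * math.log(2 * n / delta))
    running = items[0]
    for challenger in items[1:]:
        running = duel(prob_fn, running, challenger, budget)
    return running
```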
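
To mirror the stated setup (a 0.05-maximum with δ = 0.1, results averaged over 1000 runs), the sketch above can be driven by a simulated comparison model. The Bradley-Terry-Luce instance below is a hypothetical stand-in; the paper's simulated models are the ones described in its Section 8.

```python
# Hypothetical BTL instance: n items with scores 1.0, 1.1, ..., so item
# n - 1 is the true maximum. The paper's simulated instances differ.
n = 10
scores = [1.0 + 0.1 * k for k in range(n)]

def btl(i, j):
    return scores[i] / (scores[i] + scores[j])

def is_eps_max(winner, eps=0.05):
    # winner is an eps-maximum if the true maximum beats it with
    # probability at most 1/2 + eps
    return btl(n - 1, winner) - 0.5 <= eps

runs = 1000
hits = sum(is_eps_max(seq_elim_max(list(range(n)), btl, eps=0.05, delta=0.1))
           for _ in range(runs))
print(f"fraction of runs returning a 0.05-maximum: {hits / runs:.3f}")
```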