Accelerated Spectral Ranking
Authors: Arpit Agarwal, Prathamesh Patil, Shivani Agarwal
ICML 2018
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Our experiments on several real-world and synthetic datasets confirm that our new ASR algorithm is indeed orders of magnitude faster than existing algorithms. |
| Researcher Affiliation | Academia | 1Department of Computer and Information Science, University of Pennsylvania, Philadelphia, USA. |
| Pseudocode | Yes | Algorithm 1 ASR |
| Open Source Code | Yes | Code available: https://github.com/agarpit/asr |
| Open Datasets | Yes | We conducted experiments on the YouTube dataset (Shetty, 2012), GIF-anger dataset (Rich et al.), and the SFwork and SFshop (Koppelman & Bhat, 2006) datasets. |
| Dataset Splits | No | The paper generates synthetic data and uses real-world datasets for evaluation, but it does not specify explicit training, validation, or test dataset splits (e.g., percentages or sample counts) for reproducibility. |
| Hardware Specification | No | The paper does not provide specific hardware details such as exact GPU/CPU models, processor types, or cloud instance specifications used for running the experiments. |
| Software Dependencies | No | The paper mentions the use of the 'power method' for implementation but does not provide specific ancillary software details, such as library names with version numbers (e.g., Python 3.x, PyTorch 1.x). |
| Experiment Setup | Yes | In our experiments we selected n = 500, and the weight wi of each item i ∈ [n] was drawn uniformly at random from the range (0, 1); the weights were then normalized so they sum to 1. A comparison graph Gc was generated according to each of the graph topologies above. The parameter L was set to 300 log2 n. The winner for each comparison set was drawn according to the MNL model with weights w. The convergence criterion for all algorithms was the same: we run the algorithm until the L1 distance between the new estimates and the old estimates falls below 0.0001. Here, we give results when the regularization parameter λ is set to 0.2, and defer the results for other parameter values to the supplementary material. |
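The synthetic setup in the Experiment Setup row can be sketched in a few lines. This is a minimal illustration, not the authors' code: the choice of random pairs as the comparison topology and the reading of "300 log2 n" as 300·log₂(n) are assumptions; only the weight generation, normalization, and MNL winner draws are taken directly from the description.

```python
import numpy as np

rng = np.random.default_rng(0)

n = 500
# Weight of each item i in [n] drawn uniformly from (0, 1), then
# normalized so the weights sum to 1, as described in the paper.
w = rng.uniform(0.0, 1.0, size=n)
w /= w.sum()

def mnl_winner(comparison_set, w, rng):
    """Draw the winner of one comparison under the MNL model:
    item i in set S wins with probability w_i / sum_{j in S} w_j."""
    s = np.asarray(comparison_set)
    p = w[s] / w[s].sum()
    return int(s[rng.choice(len(s), p=p)])

# Assumption: "300 log2 n" read as 300 * log2(n) comparisons, drawn
# over uniformly random pairs (one possible comparison-graph topology).
L = int(300 * np.log2(n))
wins = np.zeros(n)
for _ in range(L):
    pair = rng.choice(n, size=2, replace=False)
    wins[mnl_winner(pair, w, rng)] += 1
```

The convergence criterion described in the row would then be checked between iterations of the ranking algorithm as `np.abs(new_est - old_est).sum() < 1e-4`.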