Extreme Learning to Rank via Low Rank Assumption
Authors: Minhao Cheng, Ian Davidson, Cho-Jui Hsieh
ICML 2018
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We conduct experiments on real world datasets, showing that the algorithm achieves higher accuracy and faster training time compared with state-of-the-art methods. |
| Researcher Affiliation | Academia | 1Department of Computer Science, University of California Davis, USA. 2Department of Statistics, University of California Davis, USA. |
| Pseudocode | Yes | Algorithm 1 Factorization RankSVM: Computing ∇_U f(U, V) (a hedged sketch of this gradient appears below the table) |
| Open Source Code | No | The paper does not provide concrete access to source code for the methodology described. |
| Open Datasets | Yes | We choose three datasets in our real-world application experiments: (1) Yahoo! Movies User Ratings and Descriptive Content Information V1_0, (2) HetRec2011-MovieLens-2K (Cantador et al., 2011), (3) MovieLens 20M Dataset (Harper & Konstan, 2016). |
| Dataset Splits | Yes | For each ranking task, we randomly split the items into training items and testing items. In the training phase, we use all the pairs between training items to train the model, and in the testing phase we evaluate the prediction accuracy for all the testing-testing item pairs and testing-training item pairs, which is similar to BPR (Rendle et al., 2009). ... We choose the best regularization parameter for each method by a validation set. (This split and evaluation protocol is sketched in code below the table.) |
| Hardware Specification | Yes | All the experiments are conducted on a server with an Intel E7-4820 CPU and 256G memory. |
| Software Dependencies | No | The paper mentions methodological components such as the squared hinge loss and gradient descent, but does not name specific software packages or version numbers needed for reproducibility. |
| Experiment Setup | Yes | We choose the best regularization parameter for each method by a validation set. |
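
The Pseudocode row above points to Algorithm 1, which computes the gradient of the objective with respect to the query-side factor U. The following is a minimal sketch of such a gradient, assuming a pairwise squared hinge loss over factored scores X = UVᵀ with Frobenius-norm regularization, which is our reading of the formulation; the function name, pair encoding, and λ value are illustrative, not taken from the paper.

```python
import numpy as np

def grad_U(U, V, pairs, lam=0.1):
    """Gradient of a pairwise squared hinge loss w.r.t. the query factor U.

    U     : (n_queries, k) query/user factors
    V     : (n_items, k)   item factors
    pairs : iterable of (i, j, l), meaning query i prefers item j over item l
    lam   : L2 (Frobenius-norm) regularization weight -- illustrative value
    """
    G = 2.0 * lam * U                   # gradient of lam * ||U||_F^2
    for i, j, l in pairs:
        diff = V[j] - V[l]              # difference of the two item factors
        slack = 1.0 - U[i] @ diff       # 1 - (x_ij - x_il)
        if slack > 0.0:                 # squared hinge contributes only inside the margin
            G[i] -= 2.0 * slack * diff
    return G
```

A plain gradient step such as `U -= eta * grad_U(U, V, pairs)`, with a symmetric update for V, is one way to use this; the step size `eta` and the update schedule are assumptions, since the table does not record the paper's exact optimizer settings.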
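
The Dataset Splits row quotes an item-level protocol: items are split at random into training and testing items, training uses all pairs among training items, and evaluation covers every testing-testing and testing-training pair. A rough sketch of that split and pair enumeration, with an assumed 20% test fraction and hypothetical helper names, could look like this:

```python
import numpy as np

def split_items(n_items, test_frac=0.2, seed=0):
    """Randomly split item indices into training and testing items.
    The 20% test fraction and the fixed seed are assumptions, not
    values reported in the quoted text."""
    rng = np.random.default_rng(seed)
    perm = rng.permutation(n_items)
    n_test = int(test_frac * n_items)
    return perm[n_test:], perm[:n_test]     # (train_items, test_items)

def evaluation_pairs(train_items, test_items):
    """All testing-testing and testing-training item pairs, per the quoted protocol."""
    test_test = [(j, l) for j in test_items for l in test_items if j < l]
    test_train = [(j, l) for j in test_items for l in train_items]
    return test_test + test_train
```

Prediction accuracy would then be the fraction of these pairs whose predicted ordering (from the factored scores) agrees with the ground-truth ordering; enumerating all pairs explicitly, as done here, is only practical at sketch scale.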