iSplit LBI: Individualized Partial Ranking with Ties via Split LBI
Authors: Qianqian Xu, Xinwei Sun, Zhiyong Yang, Xiaochun Cao, Qingming Huang, Yuan Yao
NeurIPS 2019
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Extensive experiments on simulated and real-world datasets demonstrate that our new approach significantly outperforms state-of-the-art alternatives. |
| Researcher Affiliation | Collaboration | (1) Key Lab. of Intelligent Information Processing, Institute of Computing Technology, CAS; (2) Microsoft Research Asia; (3) State Key Laboratory of Information Security, Institute of Information Engineering, CAS; (4) School of Cyber Security, University of Chinese Academy of Sciences; (5) School of Computer Science and Tech., University of Chinese Academy of Sciences; (6) Key Laboratory of Big Data Mining and Knowledge Management, CAS; (7) Peng Cheng Laboratory; (8) Department of Mathematics, Hong Kong University of Science and Technology |
| Pseudocode | No | The paper describes the iSplit LBI algorithm using the update equations (9a)-(9d) in the 'Methodology' section. However, it does not present a distinct pseudocode block or a section explicitly labeled 'Algorithm' or 'Pseudocode' (a Split LBI sketch follows this table). |
| Open Source Code | No | The paper does not provide any statement about releasing source code for the described methodology, nor does it include a link to a code repository. |
| Open Datasets | Yes | In this dataset, 25 images from the human age dataset FG-NET¹ are annotated by a group of volunteers on the ChinaCrowds platform. ... ¹http://www.fgnet.rsunit.com/ ... We validate our algorithm on simulated data with n = \|V\| = 20 items and U = 50 annotators. We first generate the true common ranking scores from N(0, 5²). (A simulation sketch follows this table.) |
| Dataset Splits | No | The paper states: 'Here we split the data into a training set (80% of each user's pairwise comparisons) and a testing set (the remaining 20%).' It mentions 5-fold cross-validation for parameter-tuning of the competitors' weak learners, and an 'early stopping strategy to find an optimal stopping time by cross validation' for its own model, but it does not detail the validation split or cross-validation procedure behind that early stopping. |
| Hardware Specification | No | The paper does not provide any specific details about the hardware used to run the experiments, such as CPU or GPU models, or memory specifications. |
| Software Dependencies | No | The paper does not provide specific version numbers for any software dependencies, libraries, or frameworks used in the implementation or experimentation (e.g., Python version, PyTorch/TensorFlow version). |
| Experiment Setup | No | The paper describes the mathematical formulation of the model and general properties of some parameters (e.g., 'the hyper-parameter κ is a damping factor'). However, it does not provide specific hyperparameter values (like learning rate, batch size, number of epochs) or other detailed system-level training settings used in its experiments. |
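
The 'Pseudocode' row notes that the method appears only as update equations (9a)-(9d). As a concrete reference point, below is a minimal sketch of the generic Split Linearized Bregman Iteration that iSplit LBI builds on, assuming a plain squared loss; the variable names (`theta`, `gamma`, `z`), the step size `alpha`, the augmentation parameter `nu`, and the damping factor `kappa` follow the standard Split LBI formulation, not the paper's exact equations.

```python
import numpy as np

def shrink(z, lam=1.0):
    """Soft-thresholding: the proximal map of lam * ||.||_1."""
    return np.sign(z) * np.maximum(np.abs(z) - lam, 0.0)

def split_lbi(X, y, D, nu=1.0, kappa=10.0, alpha=1e-4, n_iters=5000):
    """Generic Split LBI for min_theta 0.5*||X @ theta - y||^2, with sparsity
    imposed on D @ theta through an augmented variable gamma:

        ell(theta, gamma) = 0.5*||X @ theta - y||^2
                            + (1 / (2 * nu)) * ||D @ theta - gamma||^2

    Returns the whole iterate path; a point on the path is selected by
    early stopping rather than by tuning a penalty weight.
    """
    p, m = X.shape[1], D.shape[0]
    theta, gamma, z = np.zeros(p), np.zeros(m), np.zeros(m)
    path = []
    for _ in range(n_iters):
        grad_theta = X.T @ (X @ theta - y) + D.T @ (D @ theta - gamma) / nu
        grad_gamma = (gamma - D @ theta) / nu
        theta = theta - kappa * alpha * grad_theta  # gradient step on theta
        z = z - alpha * grad_gamma                  # mirror-descent (dual) step
        gamma = kappa * shrink(z, 1.0)              # damped soft-thresholding
        path.append((theta.copy(), gamma.copy()))
    return path
```

Early stopping along the returned path acts as implicit regularization, which matches the paper's use of a cross-validated stopping time (and its description of `kappa` as a damping factor) noted in the rows above.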
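
The 'Open Datasets' and 'Dataset Splits' rows quote the simulated setup (n = |V| = 20 items, U = 50 annotators, common scores from N(0, 5²)) and the per-user 80/20 train/test split. The sketch below puts those pieces together; the comparison-generation rule (30 random pairs per user, preference by noisy score difference) is a hypothetical stand-in, since the paper's full generative model, which also includes user-specific score deviations, is not quoted in the table.

```python
import numpy as np

rng = np.random.default_rng(0)
n_items, n_users = 20, 50              # n = |V| = 20 items, U = 50 annotators

# True common ranking scores drawn from N(0, 5^2), as quoted above.
common_scores = rng.normal(0.0, 5.0, size=n_items)

# Hypothetical comparison model: each user judges 30 random pairs and prefers
# the higher-scoring item up to Gaussian noise (an assumed rule, not the
# paper's exact generative model).
comparisons = []
for u in range(n_users):
    for i, j in rng.choice(n_items, size=(30, 2)):
        if i == j:
            continue
        gap = common_scores[i] - common_scores[j] + rng.normal(0.0, 1.0)
        comparisons.append((u, i, j, 1 if gap > 0 else -1))

# 80/20 train/test split within each user's comparisons, as in the paper.
train, test = [], []
for u in range(n_users):
    cu = [c for c in comparisons if c[0] == u]
    order = rng.permutation(len(cu))
    cut = int(0.8 * len(cu))
    train += [cu[k] for k in order[:cut]]
    test += [cu[k] for k in order[cut:]]
```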