Predicting Label Distribution from Multi-label Ranking
Authors: Yunan Lu, Xiuyi Jia
NeurIPS 2022
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Finally, we implement extensive experiments to validate our proposal. |
| Researcher Affiliation | Academia | Yunan Lu, Xiuyi Jia; School of Computer Science and Engineering, Nanjing University of Science and Technology, Nanjing 210094, China; {luyn, jiaxy}@njust.edu.cn |
| Pseudocode | Yes | Algorithm 1 Generic DRAM. Require: training set {(x_n, σ_n)}_{n=1}^{N}, testing instance x*, score function φ, number of mixture components K, number of Monte Carlo samples L; |
| Open Source Code | Yes | The supplemental material provides a detailed instruction for reproducing the main results. |
| Open Datasets | Yes | We adopt several widely used label distribution datasets, including Movie [4], Emotion6 [18], Twitter-LDL, and Flickr-LDL [36]. |
| Dataset Splits | Yes | Each method is run ten times on random dataset partitions (70% for training and 30% for testing); the average values and standard deviations are recorded. For our method, we set K = 3 and L = 20, and λ is selected from {10^-5, 5×10^-5, 10^-4, 5×10^-4, ..., 10^1, 5×10^1} by five-fold cross-validation. For the above comparison methods, since the label distributions are unavailable during training, the hyperparameter configuration that gives the highest Rho on the validation set is used. In addition, we directly train DM and SA on the ground-truth label distributions for comparison, referred to as GT+DM and GT+SA for short. For these two comparison methods, the hyperparameter configuration that gives the best Cheb, Canber, Cosine, and Rho on the validation set is used. |
| Hardware Specification | No | Did you include the total amount of compute and the type of resources used (e.g., type of GPUs, internal cluster, or cloud provider)? [No] |
| Software Dependencies | No | The paper mentions software components and algorithms like GL, SA, VI, DM, and L-BFGS, but does not provide specific version numbers for any of them. |
| Experiment Setup | Yes | For our method, we set K = 3 and L = 20, and λ is selected from {10^-5, 5×10^-5, 10^-4, 5×10^-4, ..., 10^1, 5×10^1} by five-fold cross-validation. |
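
The "Dataset Splits" row above describes the evaluation protocol: ten runs on random 70/30 train/test partitions, with average values and standard deviations reported. Below is a minimal sketch of that outer loop, assuming a hypothetical `evaluate_method` callable that trains a model on the training split and returns a scalar score such as Rho; the model and metric internals are placeholders, not the authors' implementation.

```python
import numpy as np
from sklearn.model_selection import train_test_split

def repeated_holdout(X, Y, evaluate_method, n_runs=10, test_size=0.3, seed=0):
    """Run a method on repeated random 70/30 partitions and report mean and std.

    `evaluate_method(X_tr, Y_tr, X_te, Y_te)` is a hypothetical callable that
    trains on the training split and returns a scalar test score (e.g., Rho).
    """
    scores = []
    for run in range(n_runs):
        # Fresh random 70/30 partition for each of the ten runs.
        X_tr, X_te, Y_tr, Y_te = train_test_split(
            X, Y, test_size=test_size, random_state=seed + run)
        scores.append(evaluate_method(X_tr, Y_tr, X_te, Y_te))
    scores = np.asarray(scores)
    return scores.mean(), scores.std()
```

The "Experiment Setup" row fixes K = 3 and L = 20 and selects λ from the grid {10^-5, 5×10^-5, ..., 10^1, 5×10^1} by five-fold cross-validation. A hedged sketch of that grid and of the selection step follows; `fit_and_score` is a hypothetical stand-in for training the proposed model with a given λ and scoring it on the held-out fold.

```python
import numpy as np
from sklearn.model_selection import KFold

# Grid {1e-5, 5e-5, 1e-4, 5e-4, ..., 1e1, 5e1}: coefficients 1 and 5 times
# powers of ten from 10^-5 up to 10^1 (14 values in total).
LAMBDA_GRID = [c * 10.0 ** e for e in range(-5, 2) for c in (1, 5)]

def select_lambda(X_tr, Y_tr, fit_and_score, K=3, L=20, n_splits=5, seed=0):
    """Pick the λ with the best mean five-fold CV score (assumed protocol).

    `fit_and_score(X_fit, Y_fit, X_val, Y_val, lam, K, L)` is hypothetical:
    it trains the model with weight `lam`, K mixture components, and
    L Monte Carlo samples, then returns a validation score.
    """
    kf = KFold(n_splits=n_splits, shuffle=True, random_state=seed)
    best_lam, best_score = None, -np.inf
    for lam in LAMBDA_GRID:
        fold_scores = []
        for fit_idx, val_idx in kf.split(X_tr):
            fold_scores.append(fit_and_score(
                X_tr[fit_idx], Y_tr[fit_idx], X_tr[val_idx], Y_tr[val_idx],
                lam, K, L))
        mean_score = float(np.mean(fold_scores))
        if mean_score > best_score:
            best_lam, best_score = lam, mean_score
    return best_lam
```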
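For the comparison methods, the same grid-search machinery applies, but the selection criterion differs as stated above: methods trained without label distributions pick the configuration with the highest Rho on the validation set, while GT+DM and GT+SA pick the configuration that performs best across Cheb, Canber, Cosine, and Rho.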