Multi-Label Learning with Pairwise Relevance Ordering
Authors: Ming-Kun Xie, Sheng-Jun Huang
NeurIPS 2021
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Empirical studies on multiple datasets and metrics validate the effectiveness of the proposed method. |
| Researcher Affiliation | Academia | Ming-Kun Xie and Sheng-Jun Huang, College of Computer Science and Technology, Nanjing University of Aeronautics and Astronautics; MIIT Key Laboratory of Pattern Analysis and Machine Intelligence, Nanjing, 211106. {mkxie, huangsj}@nuaa.edu.cn |
| Pseudocode | No | The paper does not contain any pseudocode or clearly labeled algorithm blocks. |
| Open Source Code | No | The paper does not provide any statement or link indicating that the source code for their methodology is open-source or publicly available. |
| Open Datasets | Yes | We evaluate our method on five multi-label datasets: Multi-MNIST [Finn et al., 2017], Multi-Kuzushiji-MNIST (Multi-KMNIST for short), Multi-Fashion-MNIST (Multi-FMNIST for short), VOC2007 [Everingham et al., 2010] and MSCOCO [Lin et al., 2014]. |
| Dataset Splits | Yes | For the three Multi-MNIST-style datasets, we randomly sample 6,000 images for training and 4,000 images for testing. VOC2007 contains 9,963 images for 20 object categories, which are divided into train, val and test sets. [...] MSCOCO contains 82,081 images as the training set and 40,504 images as the validation set. We randomly sample 20,000 images from the training set for training and 10,000 images from the validation set for testing. |
| Hardware Specification | Yes | All the experiments are conducted on GeForce RTX 2080 GPUs. |
| Software Dependencies | No | The paper mentions the 'Pytorch platform [Paszke et al., 2019]' and the 'Adam [Kingma and Ba, 2015]' optimizer, but does not provide specific version numbers for PyTorch or other software libraries. |
| Experiment Setup | Yes | For experiments on Multi-MNIST-style datasets, we train a linear model by using Adam [Kingma and Ba, 2015] optimizer with learning rate of 0.001. We added an ℓ2-regularization term, with the regularization parameter of 0.0001. [...] For experiments on VOC2007 and MSCOCO, we use an Alexnet [Krizhevsky et al., 2012] and a Resnet-18 [He et al., 2016] pre-trained with the ILSVRC2012 dataset on Pytorch platform [Paszke et al., 2019]. The Alexnet and Resnet-18 are trained by using stochastic gradient descent (SGD) with learning rate of 0.0001. An ℓ2-regularization term is added with the regularization parameter of 0.0001. The batch size for all datasets is set as 200. |
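
To make the reported protocol concrete, the sketch below re-creates the data subsampling and training configuration quoted in the table in PyTorch. This is a minimal illustration under stated assumptions, not the authors' released code: the dataset objects, `num_classes`, the linear model's input size, and the random seed are placeholders, and the `torchvision` weights enum assumes torchvision ≥ 0.13.

```python
import torch
import torch.nn as nn
from torch.utils.data import Subset
from torchvision import models

# --- Random subsampling, as described for the Multi-MNIST-style datasets and MSCOCO ---
# `dataset` is a placeholder torch Dataset; the paper does not specify a sampling seed.
def random_subset(dataset, n, seed=0):
    g = torch.Generator().manual_seed(seed)
    idx = torch.randperm(len(dataset), generator=g)[:n]
    return Subset(dataset, idx.tolist())

# Multi-MNIST-style: 6,000 images for training, 4,000 for testing, e.g.
#   train_set = random_subset(full_train_set, 6_000)
# MSCOCO: 20,000 images sampled from the training set, 10,000 from the validation set.

# --- Multi-MNIST-style setup: linear model trained with Adam, lr = 0.001, l2 = 0.0001 ---
input_dim = 784          # placeholder; the excerpt does not state the flattened input size
num_classes = 10         # placeholder label count
linear_model = nn.Linear(input_dim, num_classes)
adam = torch.optim.Adam(linear_model.parameters(), lr=1e-3, weight_decay=1e-4)

# --- VOC2007 / MSCOCO setup: ImageNet-pretrained ResNet-18 (or AlexNet), SGD, lr = 0.0001 ---
resnet = models.resnet18(weights=models.ResNet18_Weights.IMAGENET1K_V1)
resnet.fc = nn.Linear(resnet.fc.in_features, 20)   # 20 categories for VOC2007
sgd = torch.optim.SGD(resnet.parameters(), lr=1e-4, weight_decay=1e-4)

batch_size = 200  # reported batch size for all datasets
```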