Variational Label Enhancement
Authors: Ning Xu, Jun Shu, Yun-Peng Liu, Xin Geng
ICML 2020
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | The recovery experiment on fourteen label distribution datasets and the predictive experiment on ten multi-label learning datasets validate the advantage of our approach over the state-of-the-art approaches. |
| Researcher Affiliation | Academia | 1School of Computer Science and Engineering, Southeast University, Nanjing 210096, China 2School of Mathematics and Statistics, Xi'an Jiaotong University, Xi'an 710049, China. |
| Pseudocode | No | The paper does not contain any structured pseudocode or algorithm blocks. |
| Open Source Code | No | The paper does not provide an explicit statement about releasing source code for the described methodology, nor does it provide a direct link to a code repository. |
| Open Datasets | Yes | There are in total one artificial dataset and 13 real-world label distribution datasets1. These real-world datasets (Geng, 2016) were collected from biological experiments on the yeast genes, facial expression images, natural scene images and movies, respectively. (1http://palm.seu.edu.cn/xgeng/LDL/index.htm) There are ten MLL datasets2 used in the experiments. (2mulan.sourceforge.net/datasets.html) |
| Dataset Splits | Yes | All the algorithms are tested via ten-fold cross validation. |
| Hardware Specification | No | The paper does not explicitly describe the hardware used to run its experiments. |
| Software Dependencies | No | The paper mentions general methods and model architectures (e.g., MLPs, SGD, AdaGrad) but does not provide specific version numbers for any software dependencies like programming languages or libraries. |
| Experiment Setup | Yes | For LEVI, the MLPs are constructed with three hidden layers, each with 500 hidden units and softplus activation functions. |
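The experiment-setup row describes the only concrete architectural detail the paper gives for LEVI: MLPs with three hidden layers of 500 units and softplus activations. A minimal sketch of such a network's forward pass is shown below; note that the input dimension, output dimension, and weight initialization are illustrative assumptions not specified in the paper, and the function name `mlp_forward` is hypothetical.

```python
import numpy as np

def softplus(x):
    # Numerically stable softplus: log(1 + exp(x)).
    return np.logaddexp(0.0, x)

def mlp_forward(x, weights, biases):
    """Forward pass through an MLP with softplus hidden activations.

    Sketch of the shape reported for LEVI: three hidden layers of
    500 units each, followed by a linear output layer.
    """
    h = x
    for W, b in zip(weights[:-1], biases[:-1]):
        h = softplus(h @ W + b)
    return h @ weights[-1] + biases[-1]  # linear output layer

# Assumed dimensions: 64 input features, 10 output labels (not from the paper).
dims = [64, 500, 500, 500, 10]
rng = np.random.default_rng(0)
weights = [rng.normal(0.0, 0.05, (dims[i], dims[i + 1]))
           for i in range(len(dims) - 1)]
biases = [np.zeros(d) for d in dims[1:]]

out = mlp_forward(rng.normal(size=(8, 64)), weights, biases)
print(out.shape)  # (8, 10)
```

The sketch uses `np.logaddexp(0, x)` rather than `np.log(1 + np.exp(x))` to avoid overflow for large inputs, a standard choice for softplus layers.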