No Regularization Is Needed: Efficient and Effective Incomplete Label Distribution Learning

Authors: Xiang Li, Songcan Chen

IJCAI 2024

| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | "We empirically verify the effectiveness of WInLDL on ten real-world datasets, and experiments show that it is competitive with state-of-the-art methods in both random and non-random missing scenarios." |
| Researcher Affiliation | Academia | College of Computer Science and Technology, Nanjing University of Aeronautics and Astronautics; MIIT Key Laboratory of Pattern Analysis and Machine Intelligence ({lx90, s.chen}@nuaa.edu.cn) |
| Pseudocode | No | The paper describes optimization steps via equations (Eqs. 10, 11, 12, 13) but does not provide a clearly labeled "Algorithm" block or pseudocode. |
| Open Source Code | Yes | Code is available at https://github.com/EverFAITH/WInLDL |
| Open Datasets | Yes | The first five datasets are collected by Geng [Geng, 2016]; the sixth to tenth are from [Peng et al., 2015], [Liang et al., 2018], [Yang et al., 2017], [Li and Deng, 2019], and [Xie et al., 2015], respectively. |
| Dataset Splits | Yes | Each method is run five times with five random data partitions; for each partition, 80% of the data are used for training and the remaining 20% for testing. |
| Hardware Specification | Yes | All methods are run on a Linux server with an Intel Xeon W-2255 3.70 GHz CPU and 64 GB of memory. |
| Software Dependencies | No | The paper does not specify any software with version numbers (e.g., programming-language versions, specific libraries, or frameworks). |
| Experiment Setup | No | The paper notes that "maxIter is the maximum iterations, in this paper, fixed at 50" and that µ is fixed at 2 for certain algorithm parameters, but it does not provide comprehensive experimental setup details such as learning rates, batch sizes, or optimizers. |
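The evaluation protocol described in the "Dataset Splits" row (five random partitions, each with an 80/20 train/test split) can be sketched as follows; the function name, seeding scheme, and rounding of the split point are illustrative assumptions, not details taken from the paper:

```python
import numpy as np

def random_splits(n_samples, train_frac=0.8, n_repeats=5, seed=0):
    """Sketch of the paper's protocol: five independent random
    partitions, each using 80% of samples for training and the
    remaining 20% for testing. Seeding is an assumption here."""
    rng = np.random.default_rng(seed)
    splits = []
    for _ in range(n_repeats):
        perm = rng.permutation(n_samples)          # random partition
        n_train = int(round(train_frac * n_samples))
        splits.append((perm[:n_train], perm[n_train:]))
    return splits
```

Each returned pair of index arrays defines one train/test partition; a method would be trained and evaluated once per pair and the five scores averaged.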