Label Enhancement via Joint Implicit Representation Clustering

Authors: Yunan Lu, Weiwei Li, Xiuyi Jia

IJCAI 2023 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "Finally, extensive experiments validate our proposal."
Researcher Affiliation | Academia | (1) School of Computer Science and Engineering, Nanjing University of Science and Technology, China; (2) College of Computer Science and Technology, Nanjing University of Aeronautics and Astronautics, China. luyn@njust.edu.cn, liweiwei@nuaa.edu.cn, jiaxy@njust.edu.cn
Pseudocode | No | The paper does not contain any structured pseudocode or clearly labeled algorithm blocks.
Open Source Code | No | The paper neither states that the code for the described methodology is open-sourced nor links to a code repository.
Open Datasets | Yes | "We select six representative real-world LDL datasets from different tasks respectively, and their brief descriptions are shown in Table 1. For Emotion6 and Twitter-LDL, we extract a 168-dimensional feature vector for each instance [Ren et al., 2019]. Besides, we use min-max normalization to preprocess the feature vectors for all datasets to accelerate the convergence." The datasets are SBU-3DFE [Geng, 2016], Emotion6 [Peng et al., 2015], Twitter-LDL [Yang et al., 2017], Movie [Geng, 2016], Scene [Geng et al., 2022], and Human Gene [Geng, 2016]. (A sketch of the preprocessing follows the table.)
Dataset Splits | No | The paper states that "we first randomly dividing dataset (70% for training and 30% for testing)" [sic], but gives no details of a validation split. (A sketch of such a split follows the table.)
Hardware Specification | No | The paper does not specify the hardware used to run the experiments, such as GPU or CPU models.
Software Dependencies | No | The paper mentions Adam [Kingma and Ba, 2015] as the optimizer and ResNet-18 [He et al., 2016] as a backbone neural network, but does not give version numbers for these or any other software dependencies.
Experiment Setup | Yes | "For our JRC and LEIC, k is set to m+1, the dimension of the joint implicit representation is set to 64, λ is selected from {1, 2, ..., 10}, neural networks f are modeled as linear functions for simplicity, and Adam [Kingma and Ba, 2015] is adopted as the optimizer." (A sketch wiring up these hyperparameters follows the table.)
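
The min-max normalization quoted under Open Datasets is standard and easy to reproduce. A minimal sketch, assuming the instances are rows of a NumPy array (the 168-dimensional feature extraction of [Ren et al., 2019] is not reproduced here):

```python
import numpy as np

def min_max_normalize(X, eps=1e-12):
    """Scale each feature column of X into [0, 1]; eps guards against
    constant columns that would otherwise divide by zero."""
    x_min = X.min(axis=0, keepdims=True)
    x_max = X.max(axis=0, keepdims=True)
    return (X - x_min) / (x_max - x_min + eps)

# Example: normalize a batch of 168-dimensional feature vectors.
X = np.random.rand(100, 168) * 5.0
X_norm = min_max_normalize(X)
assert X_norm.min() >= 0.0 and X_norm.max() <= 1.0
```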
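
The 70/30 random split quoted under Dataset Splits can be reproduced only up to the unreported random seed. A minimal sketch using scikit-learn's train_test_split on placeholder data:

```python
import numpy as np
from sklearn.model_selection import train_test_split

# Placeholder data: feature matrix and label distributions (rows sum to 1).
X = np.random.rand(100, 168)
D = np.random.dirichlet(np.ones(6), size=100)

# 70% training / 30% testing, matching the quoted protocol; the paper
# reports no random seed, so random_state here is an arbitrary choice.
X_train, X_test, D_train, D_test = train_test_split(
    X, D, train_size=0.7, random_state=0)
```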
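
The hyperparameters quoted under Experiment Setup can be wired up directly, but the JRC/LEIC objective itself is not given in this summary. A PyTorch sketch that fixes only the reported choices (k = m+1, a 64-dimensional joint representation, λ from {1, 2, ..., 10}, linear f, Adam); the data, loss, and training loop are placeholders, not the paper's method:

```python
import torch
import torch.nn as nn

n_features, m = 168, 6   # feature and label dimensions are dataset-dependent
k = m + 1                # number of clusters, as reported (unused below,
                         # since the clustering step is not reproduced)
latent_dim = 64          # dimension of the joint implicit representation

for lam in range(1, 11):                           # λ selected from {1, 2, ..., 10}
    f = nn.Linear(n_features, latent_dim)          # linear f, as reported
    optimizer = torch.optim.Adam(f.parameters())   # Adam, as reported

    x = torch.randn(32, n_features)   # dummy mini-batch
    z = f(x)                          # joint implicit representation
    loss = lam * z.pow(2).mean()      # placeholder loss, not the paper's
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```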