Dual Relation Semi-Supervised Multi-Label Learning

Authors: Lichen Wang, Yunyu Liu, Can Qin, Gan Sun, Yun Fu (pp. 6227-6234)

AAAI 2020

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Extensive experiments illustrate the effectiveness and efficiency of our method.
Researcher Affiliation | Academia | Lichen Wang, Yunyu Liu, Can Qin, Gan Sun, Yun Fu, Northeastern University, Boston, USA, {wang.lich, liu.yuny, qin.ca, g.sun}@husky.neu.edu, yunfu@ece.neu.edu
Pseudocode | No | The paper describes the model components and their mathematical formulations but does not include any pseudocode or clearly labeled algorithm blocks.
Open Source Code | Yes | The code is available in: https://github.com/wanglichenxj/Dual-Relation-Semi-supervised-Multi-label-Learning
Open Datasets | Yes | We evaluate our model on six fine-grained multi-label datasets. ESP Game (Von Ahn and Dabbish 2004)... Corel5K (Duygulu et al. 2002)... IAPRTC-12 (Grubinger et al. 2006)... CUB (Wah et al. 2011)... SUN (Patterson and Hays 2012)... AWA (Lampert, Nickisch, and Harmeling 2014).
Dataset Splits | No | The paper lists training and testing image counts for each dataset (e.g., ESP Game has 18,689 training images and 2,081 testing images) but does not explicitly provide separate counts or percentages for a validation set.
Hardware Specification | No | The paper does not provide specific hardware details (e.g., CPU/GPU models, memory) used for running the experiments.
Software Dependencies | No | The paper mentions deploying 'fully connected networks' and using 'pre-trained VGG Networks' but does not specify programming languages, libraries, or other software dependencies with version numbers.
Experiment Setup | Yes | Since CR(·) depends on the performance of C1(·) and C2(·), we train C1(·) and C2(·) for 50 to 100 iterations before we involve the pseudo label strategy in the complete training procedure. Compared with single-label learning, we cannot simply determine the confidence of the predicted labels. To handle this problem, we fully utilize the two-classifier structure and measure the prediction differences between C1(·) and C2(·). Specifically, all target data are sent to C1(·), C2(·) and CR(·) and achieve the predictions. The prediction differences are calculated by Eq. (10).
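The quoted setup measures confidence for multi-label pseudo-labels via the disagreement between the two classifiers C1(·) and C2(·). A minimal sketch of that idea is below; the paper's Eq. (10) is not reproduced here, so the mean-absolute-difference score, the 0.1 confidence threshold, and the function names are all illustrative assumptions, not the authors' exact formulation.

```python
import numpy as np

def prediction_difference(p1, p2):
    """Per-sample disagreement between two multi-label classifiers.

    p1, p2: (n_samples, n_labels) arrays of predicted label probabilities
    from C1(.) and C2(.). Returns a (n_samples,) disagreement score;
    mean absolute difference is an assumed stand-in for Eq. (10).
    """
    return np.abs(p1 - p2).mean(axis=1)

def select_pseudo_labels(p1, p2, threshold=0.1):
    """Mark samples where C1 and C2 agree (low disagreement) as confident,
    and pseudo-label them by thresholding the averaged prediction at 0.5.
    The threshold value is a hypothetical choice for illustration."""
    diff = prediction_difference(p1, p2)
    confident = diff < threshold
    pseudo = ((p1 + p2) / 2 > 0.5).astype(int)
    return confident, pseudo
```

Under this sketch, only the pseudo-labels of low-disagreement samples would be fed back into training, mirroring the paper's strategy of warming up C1(·) and C2(·) before involving pseudo-labels.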