Adaptive Graph Guided Embedding for Multi-label Annotation

Authors: Lichen Wang, Zhengming Ding, Yun Fu

IJCAI 2018

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We evaluate our model in both conventional multi-label learning and zero-shot learning scenarios. Experimental results demonstrate that our approach outperforms other state-of-the-art methods.
Researcher Affiliation | Academia | Lichen Wang, Zhengming Ding, Yun Fu; Department of Electrical & Computer Engineering, Northeastern University, Boston, USA; College of Computer & Information Science, Northeastern University, Boston, USA; wanglichenxj@gmail.com, allanding@ece.neu.edu, yunfu@ece.neu.edu
Pseudocode | No | The paper describes the optimization steps for its variables (F, S, P) in prose, but does not provide any structured pseudocode or algorithm blocks.
Open Source Code | No | The paper does not provide an explicit statement about releasing the source code for the described methodology, nor does it include any links to a code repository.
Open Datasets | Yes | SUN Dataset [Patterson and Hays, 2012], CUB Dataset [Wah et al., 2011], AWA Dataset [Lampert et al., 2014], BIRD Dataset [Briggs et al., 2013], EMO Dataset [Trohidis et al., 2008].
Dataset Splits | Yes | In the multi-label annotation setting, we randomly and evenly split samples into labeled and unlabeled subsets. We run our model five times with the randomly generated subsets and report the average performance. 5-fold cross-validation is utilized to select the parameters µ and λ.
Hardware Specification | No | The paper does not provide any specific details about the hardware used for running the experiments, such as GPU models, CPU types, or memory specifications. It only mentions extracting features with "Very Deep Convolution Networks" and gives no hardware details for the model's training or inference.
Software Dependencies | No | The paper mentions using "Very Deep Convolution Networks" for feature extraction and KNN for classification, but does not provide specific version numbers for any software libraries, frameworks, or programming languages used in the implementation.
Experiment Setup | Yes | 5-fold cross-validation is utilized to select the parameters µ and λ. r is empirically set to 120. Since the EMO dataset contains 72-dimensional features, we manually set r = 50 for the EMO dataset. ... Our approach contains three major parameters, i.e., projection size r, trade-off parameters µ and λ.
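The evaluation protocol described in the Dataset Splits and Experiment Setup rows (random even labeled/unlabeled split, five repeated runs averaged, 5-fold cross-validation over µ and λ) can be sketched as below. This is an illustrative reconstruction only: `fit_and_score` is a hypothetical placeholder for the paper's actual solver (which alternately updates F, S, and P), and the parameter grid is an assumed choice, as the paper does not specify its candidate values.

```python
import numpy as np
from itertools import product

rng = np.random.default_rng(0)

def fit_and_score(X_lab, Y_lab, X_eval, Y_eval, mu, lam, r):
    """Hypothetical stand-in for the paper's solver, which alternately
    updates F, S, and P and then annotates the unlabeled samples.
    A fixed pseudo-score keeps this protocol sketch runnable."""
    return 1.0 / (1.0 + mu + lam)

def kfold(n, k=5, seed=0):
    """Yield (train, val) index pairs for k-fold cross-validation."""
    folds = np.array_split(np.random.default_rng(seed).permutation(n), k)
    for i in range(k):
        train = np.concatenate([f for j, f in enumerate(folds) if j != i])
        yield train, folds[i]

def select_params(X, Y, r, grid=(0.01, 0.1, 1.0, 10.0)):
    """5-fold CV over the trade-off parameters mu and lambda
    (grid values are an assumption, not taken from the paper)."""
    best, best_score = None, -np.inf
    for mu, lam in product(grid, grid):
        scores = [fit_and_score(X[tr], Y[tr], X[va], Y[va], mu, lam, r)
                  for tr, va in kfold(len(X))]
        if np.mean(scores) > best_score:
            best, best_score = (mu, lam), float(np.mean(scores))
    return best

def evaluate(X, Y, r=120, n_runs=5):
    """Randomly and evenly split samples into labeled/unlabeled subsets,
    repeat five times, and report the average performance."""
    results = []
    for _ in range(n_runs):
        idx = rng.permutation(len(X))
        half = len(X) // 2
        lab, unlab = idx[:half], idx[half:]
        mu, lam = select_params(X[lab], Y[lab], r)
        results.append(
            fit_and_score(X[lab], Y[lab], X[unlab], Y[unlab], mu, lam, r))
    return float(np.mean(results))
```

Plugging in a real implementation of the model only requires replacing `fit_and_score`; the split, repetition, and cross-validation scaffolding matches the protocol quoted above (with r = 120 by default, or r = 50 for EMO).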