Convex Co-embedding

Authors: Farzaneh Mirzazadeh, Yuhong Guo, Dale Schuurmans

AAAI 2014

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | An experimental evaluation reveals the advantages of global training in different case studies.
Researcher Affiliation | Academia | Farzaneh Mirzazadeh, Department of Computing Science, University of Alberta, Edmonton, AB T6G 2E8, Canada, mirzazad@cs.ualberta.ca; Yuhong Guo, Department of Computer and Information Sciences, Temple University, Philadelphia, PA 19122, USA, yuhong@temple.edu; Dale Schuurmans, Department of Computing Science, University of Alberta, Edmonton, AB T6G 2E8, Canada, dale@cs.ualberta.ca
Pseudocode | No | The paper describes algorithms and methods but does not include a clearly labeled pseudocode block or algorithm figure.
Open Source Code | No | The paper does not provide any statement or link indicating that the source code for the described methodology is publicly available.
Open Datasets | Yes | The paper uses the multi-label data sets Corel5K, Emotion, Mediamill, Scene, and Yeast (Table 1), which are well-known benchmark datasets. For tag recommendation, it uses BibSonomy data: 'following (Jäschke et al. 2008) we exploit the core at level 10 subsample'.
Dataset Splits | No | The paper states: 'we used 1000 examples for training and the rest for testing (except Emotion where we used a 2/3 train-test split), repeating 10 times for different random splits.' It does not explicitly mention a separate validation split. (A minimal sketch of this split protocol follows the table.)
Hardware Specification | No | The paper does not provide any specific details regarding the hardware used for running its experiments.
Software Dependencies | No | The paper does not provide specific details about ancillary software, such as library names with version numbers.
Experiment Setup | Yes | The paper specifies hyperparameters such as the regularization parameter λ (e.g., 'a common regularization parameter λ = λ1 = λ2 to the trace and squared Frobenius norm regularizers'), names specific loss functions ('smoothed version (28) of the large margin multi-label loss (27)', 'ranking logistic loss function'), and discusses initialization strategies ('random initialization', 'initializing with all 0s', 'initializing from all 1s'). Specific λ values are provided in Table 2 and Table 3. (An objective sketch follows the table.)
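The dataset-splits row quotes a repeated random train-test protocol: 1000 training examples with the remainder used for testing, a 2/3 split for Emotion, repeated 10 times. Below is a minimal Python sketch of that protocol; the dataset name, the example count, and the fit/evaluate step are placeholders for illustration, not values or code from the paper.

```python
import numpy as np

# Hypothetical sketch of the quoted split protocol: 1000 training
# examples with the remainder used for testing (a 2/3 train-test
# split for Emotion), repeated 10 times over random permutations.
# The dataset name and size are placeholders; fitting and evaluation
# are left as a comment.

rng = np.random.default_rng(0)

def random_splits(n_examples, dataset_name, n_repeats=10):
    """Yield (train_idx, test_idx) index pairs for repeated random splits."""
    n_train = (2 * n_examples) // 3 if dataset_name == "Emotion" else 1000
    for _ in range(n_repeats):
        perm = rng.permutation(n_examples)
        yield perm[:n_train], perm[n_train:]

for rep, (train_idx, test_idx) in enumerate(random_splits(2407, "Scene")):
    # fit the co-embedding model on train_idx, evaluate on test_idx
    print(f"split {rep}: {len(train_idx)} train / {len(test_idx)} test")
```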
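The experiment-setup row mentions a common regularization parameter λ = λ1 = λ2 applied to trace-norm and squared-Frobenius regularizers, a smoothed large-margin multi-label loss, and all-zeros initialization. The sketch below shows one convex objective of that general shape trained by proximal gradient descent on toy data. It is illustrative only: the squared-hinge surrogate, the step size, the synthetic data, and the optimizer are assumptions, and the paper's actual co-embedding formulation and losses (27)-(28) are not reproduced here.

```python
import numpy as np

# Toy sketch (not the paper's exact method): learn a score map W with a
# smooth per-label squared-hinge loss plus trace-norm and squared-Frobenius
# regularizers that share one lambda, optimized by proximal gradient.

rng = np.random.default_rng(0)
n, d, k = 200, 20, 5                                  # examples, features, labels
X = rng.normal(size=(n, d))
Y = (rng.random((n, k)) < 0.3).astype(float) * 2 - 1  # labels in {-1, +1}

lam = 0.1        # common lambda = lambda1 = lambda2 (illustrative value)
step = 1e-2      # gradient step size (illustrative value)
W = np.zeros((d, k))                                  # "initializing with all 0s"

def smooth_hinge_grad(S, Y):
    """Gradient of the averaged squared-hinge margin loss with respect to scores S."""
    active = np.maximum(1.0 - Y * S, 0.0)
    return (-Y * active) / S.size

def prox_trace(W, t):
    """Singular-value soft-thresholding: proximal operator of t * ||W||_*."""
    U, s, Vt = np.linalg.svd(W, full_matrices=False)
    return U @ np.diag(np.maximum(s - t, 0.0)) @ Vt

for _ in range(500):
    S = X @ W
    grad = X.T @ smooth_hinge_grad(S, Y) + lam * W    # loss + squared-Frobenius term
    W = prox_trace(W - step * grad, step * lam)       # trace-norm proximal step

print("rank of learned score map:", np.linalg.matrix_rank(W, tol=1e-6))
```

The trace-norm proximal step encourages a low-rank score map, which is what makes a shared-embedding interpretation possible while the training problem stays convex in W.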