Large-margin Weakly Supervised Dimensionality Reduction

Authors: Chang Xu, Dacheng Tao, Chao Xu, Yong Rui

ICML 2014

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Experimental results on real-world datasets demonstrate the significance of studying dimensionality reduction in the weakly supervised setting and the effectiveness of the proposed framework.
Researcher Affiliation | Collaboration | Key Lab. of Machine Perception (Ministry of Education), Peking University, Beijing 100871, China; Centre for Quantum Computation and Intelligent Systems, University of Technology, Sydney 2007, Australia; Microsoft Research, No. 5, Dan Ling Street, Haidian District, Beijing 100080, China
Pseudocode | No | The paper describes its algorithms in prose but does not include any structured pseudocode or algorithm blocks.
Open Source Code | No | The paper contains no statement or link providing concrete access to source code for the methodology.
Open Datasets | Yes | UMIST face dataset (Graham & Allinson, 1998); USPS digit dataset (Hull, 1994); MovieLens dataset (http://grouplens.org/datasets/movielens/); Book-Crossing dataset (http://grouplens.org/datasets/book-crossing/); three further datasets (Table 1)
Dataset Splits | Yes | "We used 70% of the movies rated by each user for training and the remaining 30% for testing"; "The pairwise accuracies for different algorithms were evaluated on each dataset in a five-fold cross-validation experiment." (A split/cross-validation sketch follows the table.)
Hardware Specification | No | No specific hardware details (e.g., CPU/GPU models, memory) used for running the experiments are mentioned.
Software Dependencies | No | The paper names algorithms and techniques (e.g., Rank SVM, LPP, LFDA, LMCA, GBRT) and cites the papers describing them, but it does not specify software dependencies with version numbers (e.g., a library or framework such as 'PyTorch 1.9').
Experiment Setup | No | The paper mentions parameters such as the constant margins γ1 and γ2, the constant C, the smooth parameter σ, the learning rate α, and the limited depth p within its algorithm descriptions, but it does not provide concrete numerical values for these hyperparameters or for other system-level training settings used in the experiments. (A placeholder configuration sketch follows the table.)
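
The Dataset Splits row quotes two evaluation protocols from the paper: a per-user 70%/30% train/test split of rated movies, and a five-fold cross-validation of pairwise accuracy. The sketch below is a minimal illustration of those two protocols, assuming ratings arrive as (user_id, item_id, rating) triples; it is not the authors' code, and the toy data and function names are purely illustrative.

```python
# Minimal sketch of the two protocols named in the Dataset Splits row.
# The (user_id, item_id, rating) triple format and the toy data are assumptions;
# the paper does not publish its splitting code.
import random
from collections import defaultdict

def per_user_split(ratings, train_frac=0.7, seed=0):
    """Split each user's rated items: ~70% for training, the remaining ~30% for testing."""
    rng = random.Random(seed)
    by_user = defaultdict(list)
    for user_id, item_id, rating in ratings:
        by_user[user_id].append((user_id, item_id, rating))
    train, test = [], []
    for rows in by_user.values():
        rng.shuffle(rows)
        cut = int(round(train_frac * len(rows)))
        train.extend(rows[:cut])
        test.extend(rows[cut:])
    return train, test

def five_fold_indices(n, seed=0):
    """Yield (train_idx, test_idx) pairs for a five-fold cross-validation experiment."""
    rng = random.Random(seed)
    idx = list(range(n))
    rng.shuffle(idx)
    folds = [idx[i::5] for i in range(5)]
    for k in range(5):
        test_idx = folds[k]
        train_idx = [i for j, fold in enumerate(folds) if j != k for i in fold]
        yield train_idx, test_idx

if __name__ == "__main__":
    # Toy ratings: 3 users x 10 items, purely illustrative values.
    toy = [(u, i, (u + i) % 5 + 1) for u in range(3) for i in range(10)]
    train, test = per_user_split(toy)
    print(len(train), len(test))          # roughly 70% / 30% per user
    for tr, te in five_fold_indices(len(toy)):
        print(len(tr), len(te))           # five folds over the pooled examples
```

In the paper's setting the pairwise accuracy of each algorithm would then be computed on the held-out fold; that evaluation step depends on the learned model and is omitted here.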
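
The Experiment Setup row notes that the paper names its hyperparameters (the margins γ1 and γ2, the constant C, the smooth parameter σ, the learning rate α, and the limited depth p) without reporting the values used. The placeholder configuration below merely enumerates those names to make the reproducibility gap explicit; the class name is hypothetical, the None defaults stand for "not reported", and no value here comes from the paper.

```python
# Hypothetical placeholder configuration: fields mirror the hyperparameter names
# mentioned in the paper; every default is None because no value is reported.
from dataclasses import dataclass
from typing import Optional

@dataclass
class WSDRConfig:                      # hypothetical name, not from the paper
    gamma_1: Optional[float] = None    # constant margin gamma_1 (value not reported)
    gamma_2: Optional[float] = None    # constant margin gamma_2 (value not reported)
    C: Optional[float] = None          # trade-off constant C (value not reported)
    sigma: Optional[float] = None      # smooth parameter sigma (value not reported)
    alpha: Optional[float] = None      # learning rate alpha (value not reported)
    p: Optional[int] = None            # limited tree depth p (value not reported)

missing = [name for name, value in vars(WSDRConfig()).items() if value is None]
print("Unreported hyperparameters:", ", ".join(missing))
```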