Dictionary Learning in Optimal Metric Space

Authors: Jiexi Yan, Cheng Deng, Xianglong Liu

AAAI 2018

| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | "Extensive experiments show the efficiency of our proposed method, and a better performance can be derived in real-world image clustering applications." |
| Researcher Affiliation | Academia | "1School of Electronic Engineering, Xidian University, Xi'an 710071, China 2Beihang University, Beijing 100191, China" |
| Pseudocode | Yes | "Algorithm 1 The K-SVD algorithm to solve problem (8)", "Algorithm 2 Algorithm to solve problem (9)", "Algorithm 3 Algorithm to solve problem (7)" |
| Open Source Code | No | The paper does not provide any links to source code repositories, nor does it state that the code for the described methodology is publicly available or included in supplementary materials. |
| Open Datasets | Yes | "We experiment with seven benchmark data sets including five face recognition benchmark data sets ORL (Samaria and Harter 1994), UMIST (Phillips, Bruce, and Soulie 1998), PIE (Sim, Baker, and Bsat 2002), FERET (Phillips et al. 1998), JAFFE (Lyons et al. 1998) and two objective recognition benchmark data sets COIL-20 (Nene et al. 1996), ETH-80 (Leibe and Schiele 2003)" |
| Dataset Splits | No | The paper does not explicitly provide train/validation/test dataset splits (e.g., percentages, sample counts, or citations to predefined splits) needed to reproduce the experiment, though it mentions using benchmark datasets and generating side information. |
| Hardware Specification | No | The paper does not provide hardware details (e.g., GPU/CPU models, processor types, or memory amounts) used for running its experiments. |
| Software Dependencies | No | The paper does not provide ancillary software details, such as library names with version numbers (e.g., Python 3.8, PyTorch 1.9), needed to replicate the experiment. |
| Experiment Setup | No | The paper does not provide experimental setup details such as concrete hyperparameter values (e.g., learning rate, batch size, number of epochs, optimizer settings) or detailed training configurations for the proposed model. |
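Since the paper supplies pseudocode but no released code, a reader wishing to reproduce its dictionary-learning component would have to implement K-SVD (the paper's Algorithm 1) themselves. The sketch below is a minimal, generic K-SVD in NumPy, not the authors' implementation: it alternates orthogonal matching pursuit for sparse coding with rank-1 SVD updates of each atom. All function names, the random initialization, and the iteration counts are illustrative assumptions.

```python
import numpy as np

def omp(D, y, k):
    """Orthogonal matching pursuit: greedy k-sparse code of y over dictionary D."""
    residual = y.copy()
    idx = []
    coef = np.zeros(0)
    for _ in range(k):
        # Pick the atom most correlated with the current residual.
        j = int(np.argmax(np.abs(D.T @ residual)))
        if j not in idx:
            idx.append(j)
        # Re-fit coefficients over all selected atoms (least squares).
        coef, *_ = np.linalg.lstsq(D[:, idx], y, rcond=None)
        residual = y - D[:, idx] @ coef
    x = np.zeros(D.shape[1])
    x[idx] = coef
    return x

def ksvd(Y, n_atoms, sparsity, n_iter=10, seed=0):
    """Generic K-SVD: alternate sparse coding and per-atom SVD dictionary updates.

    Y: (n_features, n_samples) data matrix. Returns dictionary D and codes X.
    """
    rng = np.random.default_rng(seed)
    D = rng.standard_normal((Y.shape[0], n_atoms))
    D /= np.linalg.norm(D, axis=0)  # unit-norm atoms
    X = np.zeros((n_atoms, Y.shape[1]))
    for _ in range(n_iter):
        # Sparse coding stage: code every sample with OMP.
        X = np.column_stack([omp(D, y, sparsity) for y in Y.T])
        # Dictionary update stage: refit each atom on the samples that use it.
        for j in range(n_atoms):
            used = np.nonzero(X[j])[0]
            if used.size == 0:
                continue
            # Residual with atom j's contribution removed.
            E = Y[:, used] - D @ X[:, used] + np.outer(D[:, j], X[j, used])
            U, s, Vt = np.linalg.svd(E, full_matrices=False)
            D[:, j] = U[:, 0]
            X[j, used] = s[0] * Vt[0]
    return D, X
```

Even with such a sketch in hand, the "No" entries above (splits, hyperparameters, software versions) would still leave `n_atoms`, `sparsity`, and the number of iterations to be guessed when attempting to reproduce the paper's results.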