Edge Structure Learning via Low Rank Residuals for Robust Image Classification

Authors: Xiang-Jun Shen, Stanley Ebhohimhen Abhadiomhen, Yang Yang, Zhifeng Liu, Sirui Tian

AAAI 2023

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Experiments are conducted on several benchmark image datasets, including MNIST, LFW, and COIL100. The results show that the proposed method has clear advantages over compared state-of-the-art (SOTA) methods, such as Low-Rank Embedding (LRE), Low-Rank Preserving Projection via Graph Regularized Reconstruction (LRPP_GRR), and Feature Selective Projection (FSP), with more than 2% improvement, particularly in corrupted cases.
Researcher Affiliation | Academia | (1) School of Computer Science and Communication Engineering, Jiangsu University, Jiangsu, 212013, China; (2) Department of Computer Science, University of Nigeria, Nsukka, Nigeria; (3) Department of Electronic Engineering, School of Electronic and Optical Engineering, Nanjing University of Science and Technology, Nanjing, 210094, China
Pseudocode | Yes | Algorithm 1: The Algorithm of ESL-LRR.
Open Source Code | No | The paper does not contain an explicit statement or link to the open-source code for the ESL-LRR methodology.
Open Datasets | Yes | Dataset: To conduct various experiments, five popular benchmark image datasets are utilized: MNIST [1] and USPS [2] for handwritten image experiments, LFW [3] and Yale [4] for facial image experiments, and COIL100 [5] for object image experiments. Section 4.1 briefly describes each dataset. ([1] http://yann.lecun.com/exdb/mnist/ [2] https://www.kaggle.com/bistaumanga/usps-dataset [3] http://vis-www.cs.umass.edu/lfw/ [4] http://vision.ucsd.edu/content/yale-face-database [5] https://www.kaggle.com/jessicali9530/coil100)
Dataset Splits | No | Note that 70% of samples are randomly selected as the training set and 30% as the testing set, based on findings in the literature that the best results are obtained from such a split.
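The 70/30 random split described above can be sketched as follows; this is a minimal illustration of random partitioning, not the authors' actual experiment code, and the seed and helper name are assumptions for reproducibility of the sketch.

```python
import random

def train_test_split(samples, train_frac=0.7, seed=0):
    """Randomly partition samples into a training set (70%) and a
    testing set (30%), as the paper reports doing."""
    rng = random.Random(seed)  # fixed seed is an assumption, for repeatability
    idx = list(range(len(samples)))
    rng.shuffle(idx)
    cut = int(round(train_frac * len(samples)))
    train = [samples[i] for i in idx[:cut]]
    test = [samples[i] for i in idx[cut:]]
    return train, test

train, test = train_test_split(list(range(100)))
print(len(train), len(test))  # 70 30
```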
Hardware Specification | Yes | And all experiments were performed using Matlab 2020a installed on an AMD Ryzen 9-5950X system with 64GB RAM.
Software Dependencies | Yes | And all experiments were performed using Matlab 2020a installed on an AMD Ryzen 9-5950X system with 64GB RAM.
Experiment Setup | Yes | In the case of ESL-LRR, parameters λ1, λ2, and λ3 need tuning to achieve a robust solution based on the objective function of Eq. (6)... a reasonable strategy is adopted from reference (Wen, Xu, and Liu 2020) to find optimum values of the parameters using [0.001, 0.01, 0.1, ..., 1000] as the candidate set.
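The tuning strategy quoted above amounts to an exhaustive grid search over the candidate set {0.001, 0.01, 0.1, 1, 10, 100, 1000} for each of λ1, λ2, λ3. A minimal sketch follows; the `evaluate` callback is a hypothetical stand-in for training ESL-LRR with a given parameter triple and measuring accuracy, not the paper's actual procedure.

```python
from itertools import product

# Candidate set as quoted in the paper: powers of ten from 0.001 to 1000.
CANDIDATES = [0.001, 0.01, 0.1, 1, 10, 100, 1000]

def grid_search(evaluate):
    """Try every (l1, l2, l3) combination from the candidate set
    (7^3 = 343 runs) and return the best-scoring one."""
    best_params, best_score = None, float("-inf")
    for l1, l2, l3 in product(CANDIDATES, repeat=3):
        score = evaluate(l1, l2, l3)  # e.g. validation accuracy of ESL-LRR
        if score > best_score:
            best_params, best_score = (l1, l2, l3), score
    return best_params, best_score

# Toy objective that peaks at (0.1, 1, 10), just to exercise the search:
params, _ = grid_search(lambda a, b, c: -((a - 0.1) ** 2 + (b - 1) ** 2 + (c - 10) ** 2))
print(params)  # (0.1, 1, 10)
```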