Uncertainty-aware Graph-based Hyperspectral Image Classification
Authors: Linlin Yu, Yifei Lou, Feng Chen
ICLR 2024
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We present extensive empirical experiments to demonstrate the effectiveness of the proposed regularization terms on both EGCN and GPN, using three real-world HSIC datasets for OOD and misclassification detection, in comparison to five baselines. |
| Researcher Affiliation | Academia | Linlin Yu1, Yifei Lou2, Feng Chen1 1Department of Computer Science, University of Texas at Dallas {linlin.yu,feng.chen}@utdallas.edu 2Department of Mathematics and School of Data Sciences and Society, University of North Carolina at Chapel Hill yflou@unc.edu |
| Pseudocode | Yes | Algorithm 1 Model Optimization for OOD Detection |
| Open Source Code | Yes | The code is available at GitHub: https://github.com/linlin-yu/uncertainty-aware-HSIC.git |
| Open Datasets | Yes | We use three HSIC datasets for evaluation: the University of Pavia (UP), the University of Houston (UH), and the Kennedy Space Center (KSC). For the train/(validation + test) split, we adopt the public challenge split for UH (Debes et al., 2014), the same split for UP as (Hong et al., 2020), and a random split for KSC with 20 nodes for training. The detailed class description and training/validation/test ratio are presented in Table 4, following the same setting as (Hong et al., 2020): https://github.com/danfenghong/IEEE_TGRS_GCN. The UH dataset is collected by the Compact Airborne Spectrographic Imager (CASI) and released as a data fusion contest by the IEEE Geoscience and Remote Sensing Society: http://www.grss-ieee.org/community/technical-committees/data-fusion/2013-ieee-grss-data-fusion-contest/. |
| Dataset Splits | Yes | For train/(validation + test) split, we adopt the public challenge split for UH (Debes et al., 2014), the same split for UP as (Hong et al., 2020), and a random split for KSC with 20 nodes for training. For validation/test split, we use 0.2/0.8. The number of disjoint train/validation/test samples selected from each class used for all the experimental results is presented in Appendix B. |
| Hardware Specification | Yes | The three anomaly detection methods (RGAE, TLRSR, TRDFTVAD) are implemented in MATLAB and run on a desktop with an Intel Core i7-9700 and 16 GB of memory. The remaining three (softmax-GCN, EGCN-based, GPN-based) are implemented in PyTorch and tested on a single RTX 4090 GPU in a server with an AMD Ryzen Threadripper PRO 5955WX and 256 GB of memory. |
| Software Dependencies | No | The paper states that the anomaly detection methods are "implemented with Matlab" and the other models are "implemented with PyTorch". However, it does not provide specific version numbers for MATLAB, PyTorch, or any other software libraries or dependencies, which are necessary for a reproducible description. |
| Experiment Setup | Yes | For GCN-based models, we use two graph convolution layers and a 0.5 dropout probability. Scaled with graph size, KSC, UP, and UH use hidden dimensions of 64, 128, and 256, respectively. We use early stopping with a patience of 30, a maximum of 5,000 epochs, and validation cross-entropy as the stopping metric. For all models, we use the Adam optimizer; the learning rate and weight decay are carefully tuned for each dataset. Tables 7, 8, 9, and 10 provide specific hyperparameter values for learning rate, weight decay, and regularization weights (λ1, λ2, λ3) for different models and datasets. |
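The training setup quoted above (two graph-convolution layers, 0.5 dropout, Adam, early stopping with patience 30 on validation cross-entropy, up to 5,000 epochs) can be sketched in PyTorch. This is a minimal illustration, not the authors' released code: the GCN layer is a plain dense-adjacency implementation, and the learning rate and weight decay below are placeholders, since the paper tunes them per dataset (Tables 7-10).

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


def normalize_adjacency(adj: torch.Tensor) -> torch.Tensor:
    """Symmetric normalization with self-loops: A_hat = D^{-1/2}(A + I)D^{-1/2}."""
    a = adj + torch.eye(adj.size(0))
    d_inv_sqrt = a.sum(dim=1).pow(-0.5)
    return d_inv_sqrt.unsqueeze(1) * a * d_inv_sqrt.unsqueeze(0)


class TwoLayerGCN(nn.Module):
    """Two graph-convolution layers with dropout, mirroring the reported setup."""

    def __init__(self, in_dim: int, hidden_dim: int, num_classes: int,
                 dropout: float = 0.5):
        super().__init__()
        self.lin1 = nn.Linear(in_dim, hidden_dim)
        self.lin2 = nn.Linear(hidden_dim, num_classes)
        self.dropout = dropout

    def forward(self, x: torch.Tensor, a_hat: torch.Tensor) -> torch.Tensor:
        x = F.relu(a_hat @ self.lin1(x))
        x = F.dropout(x, p=self.dropout, training=self.training)
        return a_hat @ self.lin2(x)  # class logits per node


def train_with_early_stopping(model, x, a_hat, y, train_mask, val_mask,
                              lr=1e-2, weight_decay=5e-4,
                              patience=30, max_epochs=5000):
    """Adam optimization with early stopping on validation cross-entropy."""
    opt = torch.optim.Adam(model.parameters(), lr=lr, weight_decay=weight_decay)
    best_val, wait, best_state = float("inf"), 0, None
    for _ in range(max_epochs):
        model.train()
        opt.zero_grad()
        loss = F.cross_entropy(model(x, a_hat)[train_mask], y[train_mask])
        loss.backward()
        opt.step()
        model.eval()
        with torch.no_grad():
            val = F.cross_entropy(model(x, a_hat)[val_mask], y[val_mask]).item()
        if val < best_val:  # improvement: reset patience, snapshot weights
            best_val, wait = val, 0
            best_state = {k: v.clone() for k, v in model.state_dict().items()}
        else:
            wait += 1
            if wait >= patience:
                break
        if best_state is not None:
            model.load_state_dict(best_state)
    return model, best_val
```

For a real run, the hidden dimension would be 64, 128, or 256 depending on the dataset (KSC, UP, UH), and the adjacency would come from the paper's graph construction over hyperspectral pixels.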