Notice: The reproducibility variables underlying each score are classified using an automated LLM-based pipeline, validated against a manually labeled dataset. LLM-based classification introduces uncertainty and potential bias; scores should be interpreted as estimates. Full accuracy metrics and methodology are described in [1].

Gamma Distribution PCA-Enhanced Feature Learning for Angle-Robust SAR Target Recognition

Authors: Chong Zhang, Peng Zhang, Mengke Li

ICML 2025 | Venue PDF | LLM Run Details

Reproducibility Variable Result LLM Response
Research Type Experimental We validate the ΓPCA model based on two commonly used backbones, ResNet and ViT, and conduct multiple robustness experiments on the MSTAR benchmark dataset. The experimental results demonstrate that ΓPCA effectively enables the model to withstand the substantial distributional discrepancy caused by angle changes.
Researcher Affiliation Academia 1National Key Laboratory of Radar Signal Processing, School of Electronic Engineering, Xidian University, Xi'an, China. 2College of Computer Science and Software Engineering, Shenzhen University, Shenzhen, China. Correspondence to: Peng Zhang <EMAIL>, Mengke Li <EMAIL>.
Pseudocode Yes Algorithm 1 ΓPCA Algorithm.
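The pseudocode itself is not reproduced in this report. As a rough illustration of the PCA-based kernel-learning step that an algorithm like ΓPCA builds on, here is a minimal NumPy sketch using the paper's stated hyperparameters L = 2 and k = 17; the function name, patch-sampling scheme, and omission of the Gamma-distribution modeling (which gives ΓPCA its name) are all simplifications, not the authors' Algorithm 1.

```python
import numpy as np

def learn_pca_kernels(images, k=17, L=2, n_patches=2000, seed=0):
    """Learn L convolution kernels of size k x k as the top-L principal
    components of randomly sampled, mean-removed image patches."""
    rng = np.random.default_rng(seed)
    patches = []
    for img in images:
        H, W = img.shape
        for _ in range(n_patches // len(images)):
            i = rng.integers(0, H - k + 1)
            j = rng.integers(0, W - k + 1)
            p = img[i:i + k, j:j + k].ravel()
            patches.append(p - p.mean())      # remove per-patch mean
    X = np.stack(patches)                     # shape (N, k*k)
    cov = X.T @ X / len(X)                    # patch covariance matrix
    eigvals, eigvecs = np.linalg.eigh(cov)
    top = eigvecs[:, np.argsort(eigvals)[::-1][:L]]
    return top.T.reshape(L, k, k)             # L kernels of size k x k

# Usage: two 17x17 kernels from synthetic Gamma-distributed "SAR-like" images
imgs = [np.random.default_rng(i).gamma(2.0, 1.0, (64, 64)) for i in range(4)]
kernels = learn_pca_kernels(imgs, k=17, L=2)
```

The learned kernels would then be applied as fixed convolution filters ahead of the backbone; how ΓPCA weights or transforms the patch statistics under a Gamma-distribution assumption is described in the paper, not here.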
Open Source Code Yes The source code is available at https://github.com/ChGrey/GammaPCA.
Open Datasets Yes The dataset used in this paper is the static ground military target dataset MSTAR (Keydel et al., 1996). To further evaluate the generality of our method, we construct a new dataset from the widely used SAR aircraft target detection dataset, SAR-AIRcraft1.0.
Dataset Splits Yes The data are divided into a training set and a validation set with a ratio of 0.8. Azimuth Robustness Test: to simulate the scenario in which a wide range of azimuth angles is missing, all models are trained and validated on data from only one azimuth quadrant at depression 17° (e.g., azimuth 0°–90°, depression 17°)... For testing, a well-trained model is tested on the full-azimuth (0°–360°) testing set at depression 15°. Specifically, 80% of the dataset is allocated for training and the remaining 20% for testing.
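The quadrant-restricted 80/20 split described above can be sketched as follows; the function name and selection logic are illustrative, not the authors' code, and the full-azimuth test set at depression 15° would be assembled separately.

```python
import numpy as np

def azimuth_quadrant_split(azimuths, quadrant=(0.0, 90.0), train_frac=0.8, seed=0):
    """Keep only samples whose azimuth falls in one quadrant, then split
    them 80/20 into training and validation indices (the paper's ratio)."""
    azimuths = np.asarray(azimuths, dtype=float)
    keep = np.flatnonzero((azimuths >= quadrant[0]) & (azimuths < quadrant[1]))
    rng = np.random.default_rng(seed)
    rng.shuffle(keep)                         # shuffle before splitting
    n_train = int(train_frac * len(keep))
    return keep[:n_train], keep[n_train:]

# Usage: 1000 samples with uniformly distributed azimuth angles
az = np.random.default_rng(1).uniform(0.0, 360.0, 1000)
train_idx, val_idx = azimuth_quadrant_split(az)
```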
Hardware Specification No The paper does not provide specific hardware details such as GPU/CPU models, processor types, or memory amounts used for running the experiments. It only mentions general experimental settings.
Software Dependencies No The paper mentions using ResNet and ViT as backbones and pre-training on ImageNet, but it does not specify any software versions for libraries like PyTorch, TensorFlow, or specific Python versions, which are needed for replication.
Experiment Setup Yes For the backbone, due to the small size of our dataset, all models are pre-trained on ImageNet and then fine-tuned on our dataset. All networks use only Resize and CenterCrop to preprocess the input data. For the hyperparameters, the ΓPCA part uses L = 2 kernels with a kernel size of k = 17.
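The Resize and CenterCrop preprocessing named above matches standard torchvision transforms; a dependency-free NumPy sketch of the center-crop step is shown below. The crop and chip sizes are illustrative, since the paper's exact input resolution is not quoted in this report.

```python
import numpy as np

def center_crop(img, size):
    """Return the central size x size window of a 2-D image,
    mirroring torchvision's CenterCrop for single-channel inputs."""
    H, W = img.shape
    top, left = (H - size) // 2, (W - size) // 2
    return img[top:top + size, left:left + size]

# Usage: crop a 128x128 SAR chip down to 96x96 (sizes are illustrative)
chip = np.zeros((128, 128))
patch = center_crop(chip, 96)
```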