Notice: The reproducibility variables underlying each score are classified using an automated LLM-based pipeline, validated against a manually labeled dataset. LLM-based classification introduces uncertainty and potential bias; scores should be interpreted as estimates. Full accuracy metrics and methodology are described in [1].
OTIAS: OcTree Implicit Adaptive Sampling for Multispectral and Hyperspectral Image Fusion
Authors: Shangqi Deng, Jun Ma, Liang-Jian Deng, Ping Wei
AAAI 2025
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Overall, our method achieves state-of-the-art performance on the CAVE and Harvard datasets with ×4 and ×8 scaling factors, outperforming existing approaches. ... Experimental results demonstrate that our method achieves state-of-the-art performance in the MHIF task. |
| Researcher Affiliation | Academia | ¹National Key Laboratory of Human-Machine Hybrid Augmented Intelligence ²Institute of Artificial Intelligence and Robotics, Xi'an Jiaotong University ³University of Electronic Science and Technology of China |
| Pseudocode | Yes | Algorithm 1: Pseudo code of OTIAS layer in a PyTorch-like style. |
| Open Source Code | Yes | Code https://github.com/shangqideng/OTIAS |
| Open Datasets | Yes | Datasets. We conduct experiments to assess the performance of our model on the CAVE¹ and Harvard² datasets. The CAVE dataset comprises 32 hyperspectral images (HSIs)... The Harvard dataset consists of 77 HSIs... ¹https://www.cs.columbia.edu/CAVE/databases/multispectral/ ²http://vision.seas.harvard.edu/hyperspec/index.html |
| Dataset Splits | Yes | The simulated pairs with the associated GTs are randomly divided into training (80%) and testing (20%) sets. |
| Hardware Specification | Yes | The number of training epochs is fixed at 1000 on a Linux operating system with an NVIDIA RTX 4090 GPU (24 GB). |
| Software Dependencies | Yes | The proposed network is implemented in PyTorch 2.4.0 and Python 3.11. |
| Experiment Setup | Yes | Additionally, the AdamW optimizer (Kingma and Ba 2014) is used during training with a learning rate of 0.0001 to minimize the sum of absolute differences (ℓ1). The number of training epochs is fixed at 1000 on a Linux operating system with an NVIDIA RTX 4090 GPU (24 GB). |
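The dataset-split protocol reported above (simulated pairs randomly divided 80%/20% into training and testing sets) can be sketched as follows. This is a minimal illustration only: the function name `split_pairs`, the fixed seed, and the use of Python's standard `random` module are assumptions for the sketch, not details taken from the released OTIAS code.

```python
import random

def split_pairs(pairs, train_frac=0.8, seed=0):
    """Randomly divide simulated pairs into training and testing sets.

    Hypothetical helper illustrating the reported 80%/20% random split;
    seed and signature are assumptions, not the authors' implementation.
    """
    rng = random.Random(seed)          # fixed seed for a reproducible split
    shuffled = list(pairs)
    rng.shuffle(shuffled)
    cut = int(len(shuffled) * train_frac)
    return shuffled[:cut], shuffled[cut:]

# e.g. the CAVE dataset's 32 hyperspectral images
train, test = split_pairs(range(32))
print(len(train), len(test))  # → 25 7
```

Fixing the RNG seed, as sketched here, is one common way to make such a random split reproducible across runs; whether the authors did so is not stated in the quoted evidence.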