Pixel-Aware Deep Function-Mixture Network for Spectral Super-Resolution

Authors: Lei Zhang, Zhiqiang Lang, Peng Wang, Wei Wei, Shengcai Liao, Ling Shao, Yanning Zhang
Pages: 12821-12828

AAAI 2020

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Experimental results on three benchmark HSI datasets demonstrate the superiority of the proposed method.
Researcher Affiliation | Collaboration | (1) Inception Institute of Artificial Intelligence, United Arab Emirates; (2) School of Computer Science, Northwestern Polytechnical University, China; (3) School of Computing and Information Technology, University of Wollongong, Australia
Pseudocode | No | The paper does not contain any pseudocode or clearly labeled algorithm blocks.
Open Source Code | No | The paper does not provide any explicit statements about releasing code or links to a code repository.
Open Datasets | Yes | In this study, we adopt three benchmark HSI datasets, including NTIRE (Timofte et al. 2018), CAVE (Yasuma et al. 2010) and Harvard (Chakrabarti and Zickler 2011).
Dataset Splits | No | The paper specifies training and testing sets but does not explicitly mention a separate validation set or cross-validation for hyperparameter tuning.
Hardware Specification | No | The paper states it was 'implemented... on the Pytorch platform' but does not provide any specific hardware details such as GPU/CPU models or memory.
Software Dependencies | No | The paper mentions 'Pytorch platform (Ketkar 2017)' but does not provide a specific version number for PyTorch or for any other software dependency.
Experiment Setup | Yes | In the proposed method, we adopt 4 FM blocks (i.e., including Fc and p = 3). Each block contains n = 3 basis functions. The basis functions and the mixing functions consist of m = 2 convolutional blocks, and each convolutional block contains 64 filters. In each FM block, the basis functions are equipped with 3 different-sized filters for convolution, i.e., 3×3, 7×7 and 11×11, while the filter size in all other convolutional blocks is fixed as 3×3. ... In the training stage, we employ the Adam optimizer (Kingma and Ba 2014) with weight decay 1e-6. The learning rate is initially set to 1e-4 and halved every 20 epochs. The batch size is 128. We terminate the optimization at the 100-th epoch. (See the sketch below the table.)
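To make the quoted experiment setup concrete, here is a minimal PyTorch sketch of how a function-mixture (FM) block and the reported training hyperparameters could be assembled. Only the counts come from the quoted setup (4 FM blocks, n = 3 basis functions, m = 2 convolutional blocks of 64 filters, basis kernel sizes 3×3/7×7/11×11, Adam with weight decay 1e-6, learning rate 1e-4 halved every 20 epochs, batch size 128, 100 epochs); the class names (BasisFunction, FMBlock, PixelAwareFMN), the ReLU activations, the head/tail convolutions, and the 3-to-31-band input/output channels are assumptions for illustration, not details taken from the paper.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class BasisFunction(nn.Module):
    """One basis function: m = 2 conv blocks with a given kernel size (sketch)."""
    def __init__(self, channels=64, kernel_size=3, m=2):
        super().__init__()
        pad = kernel_size // 2
        layers = []
        for _ in range(m):
            layers += [nn.Conv2d(channels, channels, kernel_size, padding=pad),
                       nn.ReLU(inplace=True)]
        self.body = nn.Sequential(*layers)

    def forward(self, x):
        return self.body(x)


class FMBlock(nn.Module):
    """Function-mixture block: n basis functions with different kernel sizes,
    combined by pixel-wise mixing weights (softmax over the n bases)."""
    def __init__(self, channels=64, kernel_sizes=(3, 7, 11), m=2):
        super().__init__()
        self.bases = nn.ModuleList(
            [BasisFunction(channels, k, m) for k in kernel_sizes])
        # Mixing function: m = 2 conv blocks ending in one weight map per basis.
        self.mixer = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(channels, len(kernel_sizes), 3, padding=1))

    def forward(self, x):
        outs = torch.stack([b(x) for b in self.bases], dim=1)   # (B, n, C, H, W)
        weights = F.softmax(self.mixer(x), dim=1).unsqueeze(2)  # (B, n, 1, H, W)
        return (weights * outs).sum(dim=1)                      # pixel-wise mixture


class PixelAwareFMN(nn.Module):
    """4 FM blocks in total (p = 3 plus a final one), mapping RGB to a
    31-band hyperspectral image; channel counts are assumptions."""
    def __init__(self, in_ch=3, out_ch=31, channels=64, p=3):
        super().__init__()
        self.head = nn.Conv2d(in_ch, channels, 3, padding=1)
        self.blocks = nn.Sequential(*[FMBlock(channels) for _ in range(p + 1)])
        self.tail = nn.Conv2d(channels, out_ch, 3, padding=1)

    def forward(self, x):
        return self.tail(self.blocks(self.head(x)))


# Training configuration as quoted in the table: Adam with weight decay 1e-6,
# learning rate 1e-4 halved every 20 epochs (100 epochs total, batch size 128).
model = PixelAwareFMN()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4, weight_decay=1e-6)
scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=20, gamma=0.5)
```

The pixel-wise softmax over the n basis outputs is what would make such a mixture "pixel-aware": every spatial location receives its own blending weights over the multi-scale basis functions rather than a single global combination.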