Cooperative Hardware-Prompt Learning for Snapshot Compressive Imaging

Authors: Jiamian Wang, Zongliang Wu, Yulun Zhang, Xin Yuan, Tao Lin, Zhiqiang Tao

NeurIPS 2024

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Extensive experimental results demonstrate that the proposed FedHP coordinates the pre-trained model to multiple hardware configurations, outperforming prevalent FL frameworks by 0.35 dB under challenging heterogeneous settings.
Researcher Affiliation | Academia | Jiamian Wang (1), Zongliang Wu (2,3), Yulun Zhang (4), Xin Yuan (2), Tao Lin (2), Zhiqiang Tao (1); (1) Rochester Institute of Technology, (2) Westlake University, (3) Zhejiang University, (4) Shanghai Jiao Tong University
Pseudocode | Yes | Algorithm 1: FedHP Training Algorithm (a generic sketch follows the table)
Open Source Code | Yes | Data and code are available at https://github.com/Jiamian-Wang/FedHP-Snapshot-Compressive-Imaging.git
Open Datasets | Yes | Following existing practices [Cai et al., 2022b; Lin et al., 2022; Hu et al., 2022; Huang et al., 2021], we adopt the benchmark training dataset of CAVE [Yasuma et al., 2010], which is composed of 32 hyperspectral images with a spatial size of 512×512. [...] We employ the widely-used simulation testing dataset for the quantitative evaluation, which consists of ten 256×256×28 hyperspectral images collected from KAIST [Choi et al., 2017]. Besides, we use the real testing data with a spatial size of 660×660, collected by an SD-CASSI system [Meng et al., 2020], for the perceptual evaluation considering real-world perturbations. [...] We collect and will release the first Snapshot Spectral Heterogeneous Dataset (SSHD) containing a series of practical SCI systems... (See the measurement-model sketch after the table.)
Dataset Splits | No | For federated learning, we equally split the training dataset according to the number of clients C. The local training datasets are kept confidential across clients. Note that since one specific coded aperture determines a unique dataset according to (2), the resulting number of data samples per client can be much larger than 205/C. We employ the widely-used simulation testing dataset for the quantitative evaluation, which consists of ten 256×256×28 hyperspectral images collected from KAIST [Choi et al., 2017]. While training and testing datasets are described, explicit details on a validation split are not provided. (See the data-partition sketch after the table.)
Hardware Specification | Yes | We use PyTorch [Paszke et al., 2017] on an NVIDIA A100 GPU.
Software Dependencies | No | The paper mentions PyTorch [Paszke et al., 2017] but does not provide a specific version number for it or for other software dependencies.
Experiment Setup | Yes | The batch size is set to 12. We set the initial learning rate for both the prompt network and the adaptor as α_p = α_b = 1×10⁻⁴ with step schedulers, i.e., half annealing every 2×10⁴ iterations. We train the model with an Adam [Kingma and Ba, 2014] optimizer (β1 = 0.9, β2 = 0.999). (See the training-setup sketch after the table.)
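
The Pseudocode row references Algorithm 1 without reproducing it. As rough orientation only, below is a generic FedAvg-style sketch of how a shared prompt network could be aggregated across clients while hardware-specific adaptors stay local; every structural detail here (class names, `local_steps`, equal-weight averaging) is an assumption, not the paper's Algorithm 1.

```python
# Hypothetical FedAvg-style sketch: only the shared prompt network is
# communicated and averaged; hardware-specific adaptors stay on the clients.
# All names (client.prompt, client.train, local_steps) are illustrative.
import copy
import torch

def average_state_dicts(state_dicts):
    """Element-wise average of a list of state_dicts (equal client weights)."""
    avg = copy.deepcopy(state_dicts[0])
    for key in avg:
        avg[key] = torch.stack([sd[key].float() for sd in state_dicts]).mean(dim=0)
    return avg

def federated_round(global_prompt, clients, local_steps=100):
    local_states = []
    for client in clients:
        # Each client starts the round from the current global prompt network.
        client.prompt.load_state_dict(global_prompt.state_dict())
        client.train(local_steps)  # local update of prompt + private adaptor
        local_states.append(client.prompt.state_dict())
    # The server aggregates prompt networks; adaptors never leave the clients.
    global_prompt.load_state_dict(average_state_dicts(local_states))
    return global_prompt
```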
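
The Open Datasets and Dataset Splits rows note that each coded aperture determines a unique dataset "according to (2)". Equation (2) is not quoted here, but the cited SD-CASSI literature [Meng et al., 2020] uses the standard single-disperser forward model: each spectral band is modulated by the coded aperture, sheared spatially, and summed into one 2D snapshot. A minimal sketch, assuming a 2-pixel shift per band and no measurement noise:

```python
# Sketch of the standard SD-CASSI forward model behind the quoted equation (2).
# The shift step (2 pixels per band) and noise-free setting are assumptions.
import numpy as np

def sdcassi_measurement(cube, mask, step=2):
    """cube: (H, W, L) hyperspectral image; mask: (H, W) coded aperture.
    Returns a 2D snapshot measurement of shape (H, W + step * (L - 1))."""
    H, W, L = cube.shape
    y = np.zeros((H, W + step * (L - 1)), dtype=np.float64)
    for l in range(L):
        # Modulate band l by the coded aperture, then shear it spatially.
        y[:, l * step : l * step + W] += cube[:, :, l] * mask
    return y

# Because the mask enters multiplicatively, a different coded aperture yields
# different measurements for the same scenes, i.e., a unique per-client dataset.
rng = np.random.default_rng(0)
cube = rng.random((256, 256, 28))
mask = (rng.random((256, 256)) > 0.5).astype(np.float64)
y = sdcassi_measurement(cube, mask)  # shape (256, 310)
```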
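
The equal split "according to the number of clients C" from the Dataset Splits row could be implemented as below. Only the 205-sample count comes from the quote; the round-robin assignment, shuffling seed, and client count of 5 are illustrative assumptions.

```python
# Sketch of an equal training-data split across C federated clients.
import random

def split_scenes(num_scenes=205, num_clients=5, seed=0):
    indices = list(range(num_scenes))
    random.Random(seed).shuffle(indices)
    # Round-robin assignment gives each client ~num_scenes / C scene indices.
    return [indices[c::num_clients] for c in range(num_clients)]

shards = split_scenes()
print([len(s) for s in shards])  # [41, 41, 41, 41, 41]
```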
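
The Experiment Setup row maps directly onto standard PyTorch components. A minimal sketch, where the two `nn.Linear` modules are placeholders for the actual prompt network and adaptor:

```python
# Sketch of the quoted training setup: Adam (beta1=0.9, beta2=0.999) with
# initial learning rate 1e-4 for both the prompt network and the adaptor,
# halved every 2e4 iterations, batch size 12. Module names are placeholders.
import torch

prompt_net = torch.nn.Linear(28, 28)  # stand-in for the prompt network
adaptor = torch.nn.Linear(28, 28)     # stand-in for the adaptor

optimizer = torch.optim.Adam(
    [{"params": prompt_net.parameters()},  # alpha_p = 1e-4
     {"params": adaptor.parameters()}],    # alpha_b = 1e-4
    lr=1e-4, betas=(0.9, 0.999),
)
# "Half annealing every 2 x 10^4 iterations" maps to StepLR with gamma=0.5,
# stepped once per training iteration.
scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=20_000, gamma=0.5)
```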