Block Image Compressive Sensing with Local and Global Information Interaction

Authors: Xiaoyu Kong, Yongyong Chen, Feng Zheng, Zhenyu He

AAAI 2024

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Extensive experiments show our BRBCN method outperforms existing state-of-the-art methods by a large margin.
Researcher Affiliation | Academia | Harbin Institute of Technology (Shenzhen); Southern University of Science and Technology
Pseudocode | No | The paper describes the proposed method in prose and provides diagrams, but it does not include a structured pseudocode or algorithm block.
Open Source Code | Yes | The code is available at https://github.com/XYkong-CS/BRBCN
Open Datasets | Yes | We use ImageNet to train BRBCN and all images are converted to gray-scale and resized into 256×256. (...) Two gray-scale datasets including Set14 (14 images) (Zeyde, Elad, and Protter 2010) and BSD68 (68 images) (Sapiro 2008) and one color dataset Waterloo (4744 images) (Ma et al. 2016) are used. (A preprocessing sketch follows the table.)
Dataset Splits | No | The paper mentions using specific datasets for training and testing but does not explicitly provide information about training/validation/test dataset splits (e.g., percentages or counts for each split).
Hardware Specification | Yes | We implement the model using PyTorch, and train and test it on an Nvidia RTX 3090 GPU.
Software Dependencies | No | The paper mentions 'PyTorch' as the software used for implementation but does not provide a specific version number for it or any other software dependencies.
Experiment Setup | Yes | The training epochs and batch size are three and eight, respectively. The Adam optimization strategy is applied, with a learning rate of 10^-4 for the early two epochs, then reduced to 10^-5 for the last epoch. Five sampling ratios are investigated, including low ratios 0.01 and 0.04, middle ratios 0.1 and 0.25, and a higher ratio 0.5. The default iteration time K and block size B are set to 8 and 32. The tokens dimension C_T is set as 128. (A training-configuration sketch follows the table.)
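
The preprocessing quoted under Open Datasets (grayscale conversion and resizing to 256×256) can be expressed as a short torchvision pipeline. The sketch below is a minimal illustration under assumed defaults (default bilinear resize, an ImageFolder-style directory layout, a placeholder dataset path); it is not taken from the authors' released code.

```python
# Minimal sketch of the stated preprocessing: grayscale conversion + 256x256 resize.
# Assumptions (not from the paper): ImageFolder layout, default resize interpolation,
# pixel values scaled to [0, 1]; "path/to/imagenet" is a placeholder path.
import torchvision.transforms as T
from torchvision.datasets import ImageFolder

transform = T.Compose([
    T.Grayscale(num_output_channels=1),  # convert images to gray-scale
    T.Resize((256, 256)),                # resize to 256 x 256
    T.ToTensor(),                        # PIL image -> float tensor in [0, 1]
])

train_set = ImageFolder("path/to/imagenet", transform=transform)
```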
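
The optimization schedule quoted under Experiment Setup (three epochs, batch size eight, Adam at 10^-4 for the first two epochs and 10^-5 for the last) maps onto a standard PyTorch training loop. In the sketch below, the model, loss, and data are hypothetical stand-ins (a single convolution, MSE loss, random tensors); only the epoch count, batch size, optimizer, and learning-rate schedule follow the quoted setup, and the remaining hyperparameters (K = 8, B = 32, C_T = 128) belong to the BRBCN model itself in the authors' repository.

```python
# Sketch of the stated training schedule; the model and data are placeholders,
# not the authors' BRBCN implementation (available at the repository linked above).
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset

model = nn.Conv2d(1, 1, kernel_size=3, padding=1)         # stand-in for BRBCN
criterion = nn.MSELoss()                                   # assumed reconstruction loss
data = TensorDataset(torch.randn(64, 1, 256, 256),         # random stand-in images
                     torch.randn(64, 1, 256, 256))
train_loader = DataLoader(data, batch_size=8, shuffle=True)   # batch size 8

optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)     # lr = 1e-4 for epochs 1-2

for epoch in range(3):                                        # three epochs in total
    if epoch == 2:                                            # last epoch: lr -> 1e-5
        for group in optimizer.param_groups:
            group["lr"] = 1e-5
    for inputs, targets in train_loader:
        optimizer.zero_grad()
        loss = criterion(model(inputs), targets)
        loss.backward()
        optimizer.step()
```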