Multi-Cross Sampling and Frequency-Division Reconstruction for Image Compressed Sensing
Authors: Heping Song, Jingyao Gong, Hongying Meng, Yuping Lai
AAAI 2024
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Extensive CS experiments conducted on multiple benchmark datasets demonstrate that our MCFDNet outperforms state-of-the-art approaches, while also exhibiting superior noise robustness. |
| Researcher Affiliation | Academia | (1) School of Computer Science and Communication Engineering, Jiangsu University, China; (2) Electronic and Electrical Engineering Department, Brunel University London, United Kingdom; (3) School of Cyberspace Security, Beijing University of Posts and Telecommunications, China |
| Pseudocode | No | The paper provides architectural diagrams and mathematical formulations but does not include any explicit pseudocode or algorithm blocks. |
| Open Source Code | Yes | The code is available at github.com/songhp/MCFD-Net. |
| Open Datasets | Yes | For training purposes, we selected 40,000 sub-images from the COCO 2014 dataset... The experimental results were evaluated on three benchmark datasets: Set11 (Kulkarni et al. 2016), BSDS100 (Martin et al. 2001), and Urban100 (Huang, Singh, and Ahuja 2015). (A hedged data-pipeline sketch follows the table.) |
| Dataset Splits | No | The paper mentions using "40,000 sub-images from the COCO 2014 dataset" for training and evaluates on "Set11, BSDS100, and Urban100", but it does not specify distinct training, validation, and test splits with percentages or counts, nor does it explicitly mention a validation set. |
| Hardware Specification | Yes | The CPU utilized was an Intel(R) Core(TM) i9-10980XE, and the GPU employed was an NVIDIA GeForce RTX 3090. |
| Software Dependencies | No | The paper mentions using "Adam optimizer" but does not specify any software dependencies with version numbers, such as Python, PyTorch/TensorFlow, or CUDA versions. |
| Experiment Setup | Yes | For training purposes, we selected 40,000 sub-images from the COCO 2014 dataset, each with a size of 96 × 96. These sub-images were randomly cropped and flipped. To enhance computational efficiency and model robustness, we converted the images to the YCbCr color space and utilized only the Y channel during both training and testing phases. ... During training, we used Adam optimizer (Kingma and Ba 2015) to update model parameters with momentum and weight decay set at 0.9 and 0.999 respectively. By conveniently stacking the DMCS Block in the sampling network, we trained six different sampling rates for our model: 50%, 25%, 12.5%, 6.25%, 3.125%, and 1.5625%. (Both quoted settings are reconstructed in the sketches below.) |
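
To make the quoted data preparation concrete, here is a minimal PyTorch sketch of a pipeline matching the described setup: 96 × 96 sub-images randomly cropped and flipped, keeping only the Y channel of a YCbCr conversion. This is a reconstruction under assumptions, not the authors' code: the class name `CocoYChannelDataset` and the `coco_root` argument are hypothetical, and the paper does not say how the 40,000 sub-images were selected.

```python
# Hypothetical data pipeline approximating the paper's described training
# setup. Names here are illustrative, not from github.com/songhp/MCFD-Net.
import os
from PIL import Image
from torch.utils.data import Dataset
import torchvision.transforms as T

class CocoYChannelDataset(Dataset):
    def __init__(self, coco_root):
        self.paths = [os.path.join(coco_root, f)
                      for f in os.listdir(coco_root)
                      if f.lower().endswith((".jpg", ".png"))]
        self.transform = T.Compose([
            T.RandomCrop(96, pad_if_needed=True),  # 96 x 96 sub-images
            T.RandomHorizontalFlip(),              # random flip augmentation
            T.ToTensor(),                          # -> float tensor in [0, 1]
        ])

    def __len__(self):
        return len(self.paths)

    def __getitem__(self, idx):
        img = Image.open(self.paths[idx]).convert("YCbCr")
        y, _, _ = img.split()     # keep only the luminance (Y) channel
        return self.transform(y)  # shape: (1, 96, 96)
```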
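
Similarly, a hedged sketch of the quoted optimizer settings: the reported 0.9 and 0.999 match Adam's exponential-decay rates (β1, β2), which is presumably what the excerpt calls "momentum and weight decay". The learning rate and the `MCFDNet` constructor below are assumptions, since neither appears in the quoted text.

```python
# Hedged sketch of the training configuration; MCFDNet and the learning
# rate are placeholders, not taken from the paper or its repository.
import torch

# Six sampling rates trained by stacking the DMCS Block, per the paper.
SAMPLING_RATES = [0.50, 0.25, 0.125, 0.0625, 0.03125, 0.015625]

def make_optimizer(model: torch.nn.Module, lr: float = 1e-4) -> torch.optim.Adam:
    # betas=(0.9, 0.999): the values the excerpt reports as
    # "momentum and weight decay"; they are Adam's (beta1, beta2).
    return torch.optim.Adam(model.parameters(), lr=lr, betas=(0.9, 0.999))

# One model per sampling rate, as the paper describes:
# for rate in SAMPLING_RATES:
#     model = MCFDNet(sampling_rate=rate)  # hypothetical constructor
#     optimizer = make_optimizer(model)
```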