Deep Fractional Fourier Transform

Authors: Hu Yu, Jie Huang, Lingzhi Li, Man Zhou, Feng Zhao

NeurIPS 2023

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We experimentally evaluate the MFRFC on various computer vision tasks, including object detection, image classification, guided super-resolution, denoising, dehazing, deraining, and low-light enhancement. Our proposed MFRFC consistently outperforms baseline methods by significant margins across all tasks.
Researcher Affiliation | Collaboration | University of Science and Technology of China; Alibaba Group
Pseudocode | No | The paper includes architectural diagrams (e.g., Figure 3) but no formal pseudocode or algorithm blocks.
Open Source Code | Yes | Our code is released publicly at https://github.com/yuhuUSTC/FRFT.
Open Datasets | Yes | Following [29], the PASCAL VOC 2007 and 2012 training sets [30] are adopted for training... We verify our method on the image classification task with the CIFAR-10 dataset [31]. ...to evaluate our method on the image de-noising task, we employ the widely-used SIDD dataset as training benchmark. ...We verify our method on the image enhancement task with the commonly used dataset, LOL [36]. ...to evaluate our method on the image dehazing task, we employ ITS and SOTS indoor [40] as our training and testing datasets. For validation, we used the widely-used standard benchmark dataset Rain200H, as described in [43].
Dataset Splits | No | The paper mentions using "validation samples from the SIDD dataset" but does not provide specific split percentages or counts for training/validation/test sets across all experiments.
Hardware Specification | No | The paper does not provide specific hardware details (e.g., GPU/CPU models, memory) used for running the experiments. It only mentions support from a "GPU cluster".
Software Dependencies | No | The paper does not provide specific version numbers for software dependencies (e.g., Python, PyTorch, TensorFlow, specific libraries).
Experiment Setup | No | The paper states, "We re-implement the baseline methods following the settings in the corresponding paper," but does not provide specific hyperparameter values or detailed training configurations for their own method or the re-implemented baselines.
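Background note: the paper's central operator is the fractional Fourier transform (FRFT), a parameterized generalization of the ordinary Fourier transform. The snippet below is a minimal, illustrative sketch of one way to realize a discrete fractional-order transform, as a fractional matrix power of the unitary DFT matrix. It is not taken from the authors' released code (https://github.com/yuhuUSTC/FRFT); the function name frft_matrix_power and the order argument a are assumptions for this example, and the canonical discrete FRFT instead fixes Hermite-Gaussian-like eigenvectors, so this naive construction only conveys the intuition of interpolating between the identity (order 0) and the DFT (order 1).

```python
import numpy as np

# Minimal sketch (assumption, not the paper's implementation): build the unitary
# DFT matrix and raise it to a fractional power a, so that a = 0 gives the
# identity and a = 1 gives the ordinary DFT. The canonical discrete FRFT pins
# down Hermite-Gaussian-like eigenvectors instead; this is intuition only.
def frft_matrix_power(x, a):
    n = len(x)
    F = np.fft.fft(np.eye(n), norm="ortho")          # unitary DFT matrix
    w, V = np.linalg.eig(F)                          # F = V diag(w) V^{-1}
    Fa = V @ np.diag(w ** a) @ np.linalg.inv(V)      # principal fractional power F^a
    return Fa @ x

x = np.random.randn(8)
print(np.max(np.abs(frft_matrix_power(x, 0.0) - x)))                            # ~0: order 0 is the identity
print(np.max(np.abs(frft_matrix_power(x, 1.0) - np.fft.fft(x, norm="ortho"))))  # ~0: order 1 is the DFT
y_half = frft_matrix_power(x, 0.5)                   # an intermediate, fractional-order transform
```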