Notice: The reproducibility variables underlying each score are classified using an automated LLM-based pipeline, validated against a manually labeled dataset. LLM-based classification introduces uncertainty and potential bias; scores should be interpreted as estimates. Full accuracy metrics and methodology are described in [1].
Detail-Preserving Transformer for Light Field Image Super-resolution
Authors: Shunzhou Wang, Tianfei Zhou, Yao Lu, Huijun Di
AAAI 2022, pp. 2522-2530 | Venue PDF | LLM Run Details
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Evaluations are conducted on a number of light field datasets, including real-world scenes and synthetic data. The proposed method achieves superior performance comparing with other state-of-the-art schemes. |
| Researcher Affiliation | Academia | 1 Beijing Key Laboratory of Intelligent Information Technology, School of Computer Science and Technology, Beijing Institute of Technology, China 2 Computer Vision Laboratory, ETH Zurich, Switzerland |
| Pseudocode | No | The paper does not contain any pseudocode or algorithm blocks. |
| Open Source Code | Yes | Our code is publicly available at: https://github.com/BITszwang/DPT. |
| Open Datasets | Yes | We conduct extensive experiments on five popular LFSR benchmarks, i.e., EPFL (Rerabek and Ebrahimi 2016), HCInew (Honauer et al. 2016), HCIold (Wanner, Meister, and Goldluecke 2013), INRIA (Le Pendu, Jiang, and Guillemot 2018), and STFgantry (Vaish and Adams 2008). |
| Dataset Splits | No | The paper mentions 'training stage' and 'testing dataset' but does not specify explicit train/validation/test splits with percentages or counts for reproducibility. It implies a test set but no detailed split information for validation. |
| Hardware Specification | Yes | All experiments are carried out on a single Tesla V100 GPU card. |
| Software Dependencies | No | The paper mentions using the 'Adam optimizer' and 'ℓ1 loss' but does not specify any software names or versions (e.g., PyTorch 1.9, CUDA 11.1). |
| Experiment Setup | Yes | The ℓ1 loss is used to optimize our network. We use the Adam optimizer to train our network, with a batch size of 8. The initial learning rate is set to 2 × 10⁻⁴ and it will be halved every 15 epochs. We train the network for 75 epochs in total. |
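The learning-rate schedule in the reported setup (initial rate 2 × 10⁻⁴, halved every 15 epochs, 75 epochs total) can be sketched in plain Python. `lr_at_epoch` is a hypothetical helper, not from the paper; it mirrors a standard step-decay scheduler such as PyTorch's `StepLR(step_size=15, gamma=0.5)`:

```python
def lr_at_epoch(epoch, base_lr=2e-4, step=15, gamma=0.5):
    """Step-decay schedule as described in the paper's setup:
    the learning rate starts at 2e-4 and is halved every 15 epochs."""
    return base_lr * gamma ** (epoch // step)

# Learning rate over the full 75-epoch run described in the paper.
lrs = [lr_at_epoch(e) for e in range(75)]
# lrs[0] == 2e-4, lrs[15] == 1e-4, lrs[74] == 2e-4 * 0.5**4 == 1.25e-5
```

With these values the rate decays in five plateaus (2e-4, 1e-4, 5e-5, 2.5e-5, 1.25e-5), which is the behavior `StepLR` would produce under the stated hyperparameters.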