Learning Light Field Angular Super-Resolution via a Geometry-Aware Network
Authors: Jing Jin, Junhui Hou, Hui Yuan, Sam Kwong
AAAI 2020, pp. 11141–11148
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Experimental results over various light field datasets including large baseline light field images demonstrate the significant superiority of our method when compared with state-of-the-art ones, i.e., our method improves the PSNR of the second best method up to 2 dB in average, while saves the execution time 48×. In addition, our method preserves the light field parallax structure better. |
| Researcher Affiliation | Academia | City University of Hong Kong; Shandong University |
| Pseudocode | No | The paper does not contain any structured pseudocode or algorithm blocks. |
| Open Source Code | Yes | The codes are available at https://github.com/jingjin25/LFASR-geometry. |
| Open Datasets | Yes | The dataset used for training consists of 20 scenes from HCI dataset (Honauer et al. 2016). All images have the spatial resolution of 512 × 512, and the disparity range of [−4, 4]. ...To evaluate the performance of different methods on inputs with large baselines, 3 datasets containing totally 48 light fields with a disparity range of [−4, 4] were used, namely, HCI (Honauer et al. 2016), HCI old (Wanner, Meister, and Goldluecke 2013) and Inria DLFD (Shi, Jiang, and Guillemot 2019). |
| Dataset Splits | No | The paper mentions datasets used for training and testing, but does not specify the train/validation/test splits or proportions (e.g., 80/10/10) for the training data. |
| Hardware Specification | Yes | All methods were evaluated on an Intel 3.70 GHz desktop with 32 GB RAM and a GeForce RTX 2080 Ti GPU. |
| Software Dependencies | No | The paper states, "The model was implemented with PyTorch." However, it does not provide specific version numbers for PyTorch or any other software dependencies. |
| Experiment Setup | Yes | We used Adam optimizer (Kingma and Ba 2014) with β1 = 0.9 and β2 = 0.999. The learning rate was set to 1e-4 initially and decreased by a factor of 0.5 every 5e3 epochs. (A configuration sketch follows the table.) |
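
The reported hyperparameters map directly onto PyTorch's optimizer and scheduler API. The snippet below is a minimal sketch under those stated settings only; the placeholder network and the total epoch count are assumptions, not details from the paper (the actual geometry-aware model is in the authors' repository at https://github.com/jingjin25/LFASR-geometry).

```python
import torch

# Placeholder network standing in for the geometry-aware model;
# the real architecture is in the authors' repository.
model = torch.nn.Conv2d(3, 3, kernel_size=3, padding=1)

# Adam with beta1 = 0.9, beta2 = 0.999 and an initial learning rate of 1e-4,
# as reported in the paper.
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4, betas=(0.9, 0.999))

# StepLR halves the learning rate every 5000 scheduler steps; calling step()
# once per epoch matches "decreased by a factor of 0.5 every 5e3 epochs".
scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=5000, gamma=0.5)

num_epochs = 15000  # assumed total; the paper does not state the epoch count

for epoch in range(num_epochs):
    # ... per-batch training: forward pass, loss, optimizer.step() ...
    scheduler.step()
```

Note that stepping the scheduler once per epoch, rather than once per batch, is what makes `step_size=5000` correspond to the 5e3-epoch decay interval quoted above.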