Attention-Based View Selection Networks for Light-Field Disparity Estimation

Authors: Yu-Ju Tsai, Yu-Lun Liu, Ming Ouhyoung, Yung-Yu Chuang (pp. 12095–12103)

AAAI 2020

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Experiments show that the proposed method achieves state-of-the-art accuracy and ranks first on a popular benchmark for disparity estimation for light field images. In this section, we first introduce the datasets used for training and evaluation. We then describe the implementation details. Finally, both quantitative and qualitative results are reported and compared with state-of-the-art methods, along with the ablation study, discussions, and limitations.
Researcher Affiliation | Collaboration | Yu-Ju Tsai,1 Yu-Lun Liu,1,2 Ming Ouhyoung,1 Yung-Yu Chuang1 — 1National Taiwan University, 2MediaTek; {r06922009, yulunliu}@cmlab.csie.ntu.edu.tw, {ming, cyy}@csie.ntu.edu.tw
Pseudocode | No | No pseudocode or algorithm blocks were found in the paper.
Open Source Code | No | The paper does not provide an explicit link to open-source code for the described methodology. It mentions submitting results to a benchmark website, but not releasing the code itself.
Open Datasets | Yes | We use two datasets in our experiments: the 4D Light Field Dataset (Honauer et al. 2016) and a dataset released by Alperovich et al. (2018). The 4D Light Field Dataset is often used as the benchmark for evaluating disparity estimation methods for light field images. The Alperovich et al. (2018) dataset is also rendered using Blender, with the same resolution and number of views as the 4D Light Field Dataset.
Dataset Splits | Yes | For the 4D Light Field Dataset: "In our experiment setting, we use 16 scenes in Additional for training, 8 scenes from Stratified and Training for validating and 4 scenes from Test for testing." For the Alperovich et al. (2018) dataset: "In our experiment setting, we choose 100 scenes for training and 21 scenes for validation and testing."
Hardware Specification | Yes | Training took about one week on an NVIDIA GTX 1080Ti GPU.
Software Dependencies | No | The method is implemented using Keras with TensorFlow as the backend. No version numbers are provided for Keras or TensorFlow.
Experiment Setup | Yes | For training the network, given the predicted disparity map d̂, the ground-truth disparity map d, and corresponding exclusion mask M, we use the Adam optimizer (Kingma and Ba 2014) to minimize the following L1 loss... The following parameters are set for training: the batch size is 12 and the learning rate is 1e-3.
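The quoted setup describes a masked L1 loss: the absolute difference between predicted and ground-truth disparity, restricted to pixels kept by the exclusion mask M. A minimal NumPy sketch follows; the function name, the mask convention (1.0 = pixel counted, 0.0 = excluded), and the mean normalization over valid pixels are assumptions for illustration, not the paper's exact formulation:

```python
import numpy as np

def masked_l1_loss(d_pred, d_true, mask):
    """Mean absolute disparity error over pixels the exclusion mask keeps.

    mask uses 1.0 for included pixels and 0.0 for excluded ones
    (an assumed convention; the paper's text does not spell it out here).
    """
    diff = np.abs(d_pred - d_true) * mask
    # Normalize by the number of valid pixels (guard against an all-zero mask).
    return diff.sum() / max(mask.sum(), 1.0)

# Toy example: one of the four pixels is excluded by the mask.
d_pred = np.array([[1.0, 2.0], [3.0, 4.0]])
d_true = np.array([[1.0, 1.0], [3.0, 6.0]])
mask   = np.array([[1.0, 1.0], [1.0, 0.0]])
loss = masked_l1_loss(d_pred, d_true, mask)  # (0 + 1 + 0) / 3
```

In a Keras/TensorFlow pipeline such as the paper describes, this loss would be minimized with `tf.keras.optimizers.Adam(learning_rate=1e-3)` and a batch size of 12, mirroring the stated settings.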