Deep Event Stereo Leveraged by Event-to-Image Translation
Authors: Soikat Hasan Ahmed, Hae Woong Jang, S M Nadim Uddin, Yong Ju Jung (pp. 882-890)
AAAI 2021
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Experimental results reveal that our method outperforms the state-of-the-art methods by significant margins in both quantitative and qualitative measures. |
| Researcher Affiliation | Academia | College of Information Technology Convergence, Gachon University, Seongnam, South Korea |
| Pseudocode | No | The paper describes the architecture and various sub-networks with detailed textual explanations and figures, but it does not include any explicit pseudocode or algorithm blocks. |
| Open Source Code | No | The paper does not provide any explicit statement about releasing source code for the described methodology or a link to a code repository. |
| Open Datasets | Yes | We evaluate our proposed method on the Multi Vehicle Stereo Event Camera Dataset (MVSEC) (Zhu et al. 2018a). |
| Dataset Splits | Yes | In the split one, we train the model using 3110 samples from the Indoor Flying 2-3 and for the validation and test, we use 200 and 861 samples from the Indoor Flying 1 sequence, respectively. In the split three, we train the model with 2600 samples from the Indoor Flying 1-2 and for the validation and test, we use 200 and 1343 samples from the Indoor Flying 3, respectively. |
| Hardware Specification | Yes | A single NVIDIA TITAN XP GPU was used for the training. |
| Software Dependencies | No | The proposed deep event stereo network was implemented using PyTorch. |
| Experiment Setup | Yes | The model was trained in an end-to-end manner with the RMSprop optimizer using default settings. We trained the model for up to 15 epochs and chose the best checkpoint based on the validation results for the testing. |