Event-Image Fusion Stereo Using Cross-Modality Feature Propagation

Authors: Hoonhee Cho, Kuk-Jin Yoon

AAAI 2022 (pp. 454-462) | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | To validate our method, we conducted experiments using synthetic and real-world datasets. ... Experiments and Results ... Qualitative and Quantitative Results ... Ablation Studies
Researcher Affiliation | Academia | Hoonhee Cho and Kuk-Jin Yoon, Visual Intelligence Lab., KAIST, Daejeon, South Korea; {gnsgnsgml, kjyoon}@kaist.ac.kr
Pseudocode | No | The paper describes its methods using text and diagrams (Figures 1, 2, and 3) but does not include any explicitly labeled pseudocode or algorithm blocks.
Open Source Code | No | The paper mentions that 'only PSN (Tulyakov et al. 2019) with published codes was trained', which refers to third-party code. There is no statement or link indicating that the authors' own code for the described method is open source or publicly available.
Open Datasets | Yes | We used two different datasets for the performance evaluation. One dataset is the MVSEC (Zhu et al. 2018) of actual event data, and the other is the simulated dataset that we generated in this work. ... Our synthetic dataset was generated using a 3D computer graphics software called Blender (Community 2018).
Dataset Splits | Yes | We split the data into 9,000 samples for training, 200 samples for validation, and 2,000 samples for the test set. (A split sketch in code follows the table.)
Hardware Specification | Yes | We adopted a single NVIDIA TITAN RTX GPU for training and inference.
Software Dependencies | No | The paper mentions using the RMSprop optimizer but does not provide version numbers for any software libraries, frameworks (e.g., PyTorch, TensorFlow), or programming languages (e.g., Python).
Experiment Setup | Yes | The coefficients of Eq. 8 were set to λ0 = 0.5, λ1 = 0.5, λ2 = 0.7, λ3 = 1.0. Similarly, the coefficients of Eq. 9 were set to λ4 = 0.5, λ5 = 0.5, λ6 = 0.7, λ7 = 1.0. For comparison, we trained both our networks and other models using the RMSprop optimizer. ... models with the best performance in the validation set were selected among those trained for up to 30 epochs until convergence. (A training-setup sketch follows the table.)
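
The Dataset Splits row reports a 9,000 / 200 / 2,000 partition of the authors' synthetic dataset. The following is a minimal sketch of producing such a partition; the stand-in dataset object, the fixed seed, and the use of torch.utils.data.random_split are assumptions for illustration only and are not the authors' code.

    import torch
    from torch.utils.data import TensorDataset, random_split

    # Stand-in dataset with 11,200 samples (9,000 + 200 + 2,000), matching the
    # reported split sizes; the real data are stereo event/image samples.
    full_set = TensorDataset(torch.zeros(11_200, 1))

    # The fixed seed is an assumption; the paper does not state how samples
    # were assigned to each split.
    train_set, val_set, test_set = random_split(
        full_set, [9_000, 200, 2_000], generator=torch.Generator().manual_seed(0)
    )
    print(len(train_set), len(val_set), len(test_set))  # 9000 200 2000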
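
The Experiment Setup row quotes the coefficients of Eq. 8 and Eq. 9 and the use of the RMSprop optimizer. Below is a minimal sketch of how those pieces might be wired together, assuming both equations are weighted sums of per-scale loss terms (the equations themselves are not reproduced in this report); the placeholder network, the learning rate, and the loss-term arguments are hypothetical, not the authors' implementation.

    import torch
    import torch.nn as nn

    # Coefficients as reported in the experiment setup.
    LAMBDAS_EQ8 = [0.5, 0.5, 0.7, 1.0]   # λ0, λ1, λ2, λ3
    LAMBDAS_EQ9 = [0.5, 0.5, 0.7, 1.0]   # λ4, λ5, λ6, λ7

    def combined_loss(terms_eq8, terms_eq9):
        """Weighted sums assumed to realize Eq. 8 and Eq. 9 over per-scale loss terms."""
        loss_8 = sum(w * t for w, t in zip(LAMBDAS_EQ8, terms_eq8))
        loss_9 = sum(w * t for w, t in zip(LAMBDAS_EQ9, terms_eq9))
        return loss_8 + loss_9

    # Placeholder model standing in for the event-image fusion stereo network.
    model = nn.Conv2d(3, 1, kernel_size=3, padding=1)

    # RMSprop is stated in the paper; the learning rate is an assumed value.
    optimizer = torch.optim.RMSprop(model.parameters(), lr=1e-3)

In a training loop, combined_loss would be evaluated on the four loss terms of each equation before optimizer.step(); as reported, the checkpoint with the best validation performance within 30 epochs would then be selected.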