ProgressiveMotionSeg: Mutually Reinforced Framework for Event-Based Motion Segmentation
Authors: Jinze Chen, Yang Wang, Yang Cao, Feng Wu, Zheng-Jun Zha
AAAI 2022
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Extensive experimental results on both synthetic and real datasets demonstrate the superiority of our proposed approaches against the State-Of-The-Art (SOTA) methods. |
| Researcher Affiliation | Academia | Jinze Chen*¹, Yang Wang*¹, Yang Cao¹,², Feng Wu¹, Zheng-Jun Zha¹; ¹University of Science and Technology of China, ²Institute of Artificial Intelligence, Hefei Comprehensive National Science Center |
| Pseudocode | No | The paper includes mathematical formulations and descriptions of processes, but it does not contain any explicitly labeled "Pseudocode" or "Algorithm" blocks. |
| Open Source Code | No | The paper does not explicitly state that source code for the described methodology is available or provide a link to a code repository. |
| Open Datasets | Yes | Synthetic DVS Data. The synthetic DVS data are generated using ESIM (Rebecq, Gehrig, and Scaramuzza 2018) on simulated scenes... Real-world DVS Data. The Extreme Event Dataset (EED) (Mitrokhin et al. 2018) is a real-world event segmentation benchmark... The EV-IMO dataset (Mitrokhin et al. 2019) is an object-level motion segmentation dataset... |
| Dataset Splits | No | The paper evaluates on synthetic and real-world datasets but does not specify training, validation, or test splits (e.g., percentages or exact sample counts) needed for reproducibility. |
| Hardware Specification | No | The paper does not provide any specific details regarding the hardware (e.g., CPU, GPU models, memory) used to conduct the experiments. |
| Software Dependencies | No | The paper mentions tools used for data generation like "blender (Community 2018)" and "ESIM (Rebecq, Gehrig, and Scaramuzza 2018)" but does not provide specific version numbers for software dependencies (e.g., programming languages, libraries, frameworks) relevant to their implementation. |
| Experiment Setup | Yes | The contrast threshold is 0.5. Then, we add Gaussian noise as in (Patrick, Posch, and Delbruck 2008). To thoroughly verify the robustness of the proposed method, we set five kinds of noise levels (n ∈ {0.05, 0.10, 0.15, 0.20, 0.25}) for each simulated scene. |
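
The Experiment Setup row pins down only a few simulation parameters: a contrast threshold of 0.5 and five injected noise levels per simulated scene. The sketch below is a minimal, hedged illustration of how such a noise-level sweep could be scripted on an already simulated event stream; `add_noise_events`, the sensor resolution, and the uniform placement of noise events are assumptions for illustration, not the paper's actual noise model or ESIM's API.

```python
import numpy as np

# Parameters stated in the paper's experiment setup.
CONTRAST_THRESHOLD = 0.5
NOISE_LEVELS = [0.05, 0.10, 0.15, 0.20, 0.25]  # n in {0.05, ..., 0.25}

def add_noise_events(events, noise_level, sensor_size=(240, 180), rng=None):
    """Append noise events amounting to `noise_level` of the clean event count.

    `events` is an (N, 4) array of (timestamp, x, y, polarity) rows.
    This uses uniformly placed noise events as a stand-in; the paper's
    cited sensor-noise model is not reproduced exactly here.
    """
    if rng is None:
        rng = np.random.default_rng(0)
    n_noise = int(noise_level * len(events))
    noise = np.stack([
        rng.uniform(events[:, 0].min(), events[:, 0].max(), n_noise),  # timestamps
        rng.integers(0, sensor_size[0], n_noise),                      # x
        rng.integers(0, sensor_size[1], n_noise),                      # y
        rng.choice([-1, 1], n_noise),                                  # polarity
    ], axis=1)
    noisy = np.concatenate([events, noise], axis=0)
    return noisy[np.argsort(noisy[:, 0])]  # keep the stream time-ordered

# Example sweep: five noisy variants of one simulated scene.
clean_events = np.array([[0.001, 10, 20, 1],
                         [0.002, 11, 20, -1],
                         [0.003, 12, 21, 1]])
noisy_variants = {n: add_noise_events(clean_events, n) for n in NOISE_LEVELS}
```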