Learning Disentangled Representations of Videos with Missing Data
Authors: Armand Comas, Chi Zhang, Zlatan Feric, Octavia Camps, Rose Yu
NeurIPS 2020
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | On a moving MNIST dataset with various missing scenarios, DIVE outperforms state-of-the-art baselines by a substantial margin. We also present comparisons on a real-world MOTSChallenge pedestrian dataset, which demonstrates the practical value of our method in a more realistic setting. |
| Researcher Affiliation | Academia | ¹College of Electrical and Computer Engineering and ²Khoury College of Computer Sciences, Northeastern University, MA, USA; ³Computer Science & Engineering, University of California San Diego, CA, USA. |
| Pseudocode | No | The paper does not include any explicit pseudocode or algorithm blocks. |
| Open Source Code | Yes | Our code and data can be found at https://github.com/Rose-STL-Lab/DIVE. |
| Open Datasets | Yes | We evaluate our method on variations of moving MNIST and MOTSChallenge multi-object tracking datasets. |
| Dataset Splits | No | The paper mentions training epochs and a test set size, but it does not specify a separate validation dataset split or its size. |
| Hardware Specification | No | The paper mentions "GPUs donated by NVIDIA" in the acknowledgments, but it does not provide specific details such as the model, quantity, or other hardware specifications used for running the experiments. |
| Software Dependencies | No | The paper does not provide specific version numbers for any software dependencies, libraries, or frameworks used in the experiments. |
| Experiment Setup | Yes | We train the model for 300 epochs in scenarios 1 and 2, and 600 epochs in scenario 3. For implementation details for the experiments, please see Appendix A. (A hedged sketch of this setup appears after the table.) |
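For context on the rows above, here is a minimal, hypothetical sketch of what the reported setup implies: an epoch budget of 300 epochs for scenarios 1 and 2 and 600 epochs for scenario 3, plus random frame dropping as one plausible way to simulate a missing-data scenario on moving-MNIST-style clips. The `drop_frames` helper, the `EPOCHS_PER_SCENARIO` mapping, and the masking scheme are illustrative assumptions, not the authors' implementation; the actual code is in the linked DIVE repository.

```python
import torch

# Epoch budget as reported in the paper's Experiment Setup row:
# 300 epochs for scenarios 1 and 2, 600 epochs for scenario 3.
EPOCHS_PER_SCENARIO = {1: 300, 2: 300, 3: 600}


def drop_frames(video: torch.Tensor, missing_prob: float = 0.3,
                seed: int = 0) -> tuple[torch.Tensor, torch.Tensor]:
    """Zero out randomly chosen frames of a (T, C, H, W) clip.

    Returns the masked clip and a boolean per-frame visibility mask.
    This is an assumed masking scheme, not the paper's exact protocol.
    """
    gen = torch.Generator().manual_seed(seed)
    num_frames = video.shape[0]
    visible = torch.rand(num_frames, generator=gen) >= missing_prob
    masked = video * visible.view(-1, 1, 1, 1).to(video.dtype)
    return masked, visible


# Example on a dummy moving-MNIST-sized clip: 20 frames of 1x64x64.
clip = torch.rand(20, 1, 64, 64)
masked_clip, mask = drop_frames(clip)
print(f"{int(mask.sum())}/{len(mask)} frames visible; "
      f"scenario 3 trains for {EPOCHS_PER_SCENARIO[3]} epochs")
```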