Generative Modeling of Infinite Occluded Objects for Compositional Scene Representation
Authors: Jinyang Yuan, Bin Li, Xiangyang Xue
ICML 2019
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Extensive experiments conducted on a series of specially designed datasets demonstrate that the proposed method outperforms two state-of-the-art methods when object occlusions exist. |
| Researcher Affiliation | Academia | 1Shanghai Key Laboratory of Intelligent Information Processing; Fudan-Qiniu Joint Laboratory for Deep Learning; Shanghai Institute of Intelligent Electronics & Systems; School of Computer Science, Fudan University, China. |
| Pseudocode | No | The paper does not contain any pseudocode or clearly labeled algorithm blocks. |
| Open Source Code | No | The paper does not provide any information about open-source code for the described methodology. |
| Open Datasets | Yes | The perceptual grouping performance of the compared methods is evaluated on a series of datasets derived from the publicly released datasets provided by (Greff et al., 2016b;a; 2017). The size of images in all datasets is 48 × 48, and each image may contain 2–4 binary hollow shapes (referred to as Shapes) or real-valued handwritten digits (referred to as MNIST). |
| Dataset Splits | Yes | In all datasets, 50,000, 10,000, and 10,000 images are used for training, validation, and test, respectively. |
| Hardware Specification | No | The paper does not explicitly describe the hardware used for running its experiments. |
| Software Dependencies | No | The paper does not provide specific version numbers for software dependencies. |
| Experiment Setup | Yes | All three methods are trained on images containing 2 or 3 objects with K = 4. |