Knowledge-Guided Object Discovery with Acquired Deep Impressions
Authors: Jinyang Yuan, Bin Li, Xiangyang Xue (pp. 10798-10806)
AAAI 2021
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Empirical results suggest that the proposed framework is able to effectively utilize the acquired impressions and improve the scene decomposition performance. |
| Researcher Affiliation | Academia | Jinyang Yuan, Bin Li*, Xiangyang Xue; Shanghai Key Laboratory of Intelligent Information Processing, School of Computer Science, Fudan University; {yuanjinyang, libin, xyxue}@fudan.edu.cn |
| Pseudocode | No | The paper describes the generative model and learning procedure in text and with diagrams, but it does not include structured pseudocode or algorithm blocks. |
| Open Source Code | Yes | Code is available at https://github.com/jinyangyuan/acquired-deep-impressions. |
| Open Datasets | Yes | Images are composed of 70,000 variants of handwritten digits 0-9 in the MNIST dataset (LeCun et al. 1998). In the second type of datasets, images are composed of 70 variants of boys and girls as well as 56 other types of abstract objects provided by the Abstract Scene Dataset (Zitnick and Parikh 2013; Zitnick, Parikh, and Vanderwende 2013). |
| Dataset Splits | No | The paper describes the composition and sizes of the D_single and D_multi training datasets and the 10,000 images used for evaluation (see the illustrative compositing sketch after the table), but it does not specify a separate validation split or how cross-validation was performed. |
| Hardware Specification | No | The paper does not provide specific details about the hardware used for the experiments, such as GPU models or CPU specifications. |
| Software Dependencies | No | The paper does not list specific software dependencies with version numbers (e.g., Python 3.8, PyTorch 1.9). |
| Experiment Setup | No | The paper notes that "details of the adaptation and choices of hyperparameters are provided in the supplementary material" rather than specifying them in the main text. |
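
The dataset rows above describe scenes composed from MNIST digits, but the exact compositing procedure (canvas size, number of objects per scene, placement rule) is deferred to the supplementary material and the official repository. The snippet below is a minimal sketch of how single-object (D_single-style) and multi-object (D_multi-style) scenes could be composed; the canvas size, object counts, and max-intensity overlay rule are illustrative assumptions, not the authors' pipeline.

```python
"""Hedged sketch: composing single- and multi-object scenes from MNIST.

This is not the authors' data pipeline. Canvas size, object counts, and
placement are illustrative assumptions; the official code defines the
real setup.
"""
import numpy as np
from torchvision import datasets

CANVAS = 48          # assumed canvas size in pixels (not stated in the table)
MAX_OBJECTS = 3      # assumed upper bound of digits per multi-object scene

mnist = datasets.MNIST(root="./data", train=True, download=True)
rng = np.random.default_rng(0)


def paste(canvas, digit):
    """Overlay a 28x28 digit at a random position, keeping the max intensity."""
    h, w = digit.shape
    top = rng.integers(0, CANVAS - h + 1)
    left = rng.integers(0, CANVAS - w + 1)
    region = canvas[top:top + h, left:left + w]
    np.maximum(region, digit, out=region)  # region is a view into canvas


def make_scene(num_objects):
    """Compose a scene containing `num_objects` randomly chosen MNIST digits."""
    canvas = np.zeros((CANVAS, CANVAS), dtype=np.uint8)
    for _ in range(num_objects):
        idx = int(rng.integers(0, len(mnist)))
        digit = np.array(mnist[idx][0])  # PIL image -> uint8 array
        paste(canvas, digit)
    return canvas


# D_single-style images contain one object; D_multi-style images contain several.
single_scene = make_scene(1)
multi_scene = make_scene(int(rng.integers(2, MAX_OBJECTS + 1)))
```

A reproduction would still need the split sizes and object-placement rules from the supplementary material or the released code to match the reported experiments.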