One-sample Guided Object Representation Disassembling

Authors: Zunlei Feng, Yongming He, Xinchao Wang, Xin Gao, Jie Lei, Cheng Jin, Mingli Song

NeurIPS 2020

Reproducibility

Variable | Result | LLM Response
Research Type | Experimental | Experiments demonstrate that One-GORD achieves competitive disassembling performance and can handle natural scenes with complicated backgrounds. To compare different methods quantitatively, we adopt the Modularity Score and Integrity Score (Section 4) to measure the disassembling performance of our method against S-AE, DSD [11], MONet [6], and IODINE [12]. In the experiments, T and D are set to 10 and 100, respectively. We sample 5 representation lengths ({10, 20, 30, 40, 50}) and test all methods in each length setting. Table 1 gives the average modularity score (AMS) and average integrity score (AIS) on the SVHN dataset (the first three rows) and the CIFAR-10 dataset (the last two rows).
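The averaging step described above can be sketched as follows. The per-length scores here are placeholders for illustration, not the paper's reported numbers:

```python
# Average Modularity Score (AMS) and Average Integrity Score (AIS)
# over the five sampled representation lengths {10, 20, 30, 40, 50}.
# The per-length scores below are hypothetical placeholders.

lengths = [10, 20, 30, 40, 50]
modularity_scores = {10: 0.91, 20: 0.89, 30: 0.90, 40: 0.88, 50: 0.87}
integrity_scores = {10: 0.85, 20: 0.84, 30: 0.86, 40: 0.83, 50: 0.82}

ams = sum(modularity_scores[l] for l in lengths) / len(lengths)
ais = sum(integrity_scores[l] for l in lengths) / len(lengths)
# ams -> 0.89, ais -> 0.84 for these placeholder values
```

Averaging across length settings, rather than reporting a single length, guards against a method looking strong only at one representation size.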
Researcher Affiliation | Collaboration | Zunlei Feng (Zhejiang University, zunleifeng@zju.edu.cn); Yongming He (Zhejiang University, yongminghe@zju.edu.cn); Xinchao Wang (Stevens Institute of Technology, xinchao.wang@stevens.edu); Xin Gao (Alibaba Group, zimu.gx@alibaba-inc.com); Jie Lei (Zhejiang University of Technology, jasonlei@zjut.edu.cn); Cheng Jin (Fudan University, jc@fudan.edu.cn); Mingli Song (Zhejiang University, brooksong@zju.edu.cn)
Pseudocode | No | The paper includes a figure illustrating the architecture but does not contain structured pseudocode or algorithm blocks.
Open Source Code | No | The paper does not provide concrete access to its own source code for the methodology described. It refers to a third-party GitHub link for the ResNet2 architecture, not their specific implementation.
Open Datasets | Yes | To verify the effectiveness of the proposed One-GORD, we adopt five datasets: SVHN [29], CIFAR-10 [3], COCO [19], Mugshot [11], and mini-ImageNet [24], which are composed of different objects and complex backgrounds.
Dataset Splits | No | The training and testing sample numbers are (20000, 1000), (20000, 1000), (40000, 1000), (30000, 1000), and (10000, 1000) for SVHN, CIFAR-10, COCO, Mugshot, and mini-ImageNet, respectively. While training and testing sample numbers are provided, there is no explicit mention of a separate validation split or its size.
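The reported train/test counts can be captured in a small config, which makes the absence of a validation split explicit:

```python
# Train/test sample counts per dataset, as reported in the paper.
# Note: no validation split is specified anywhere in the paper.
splits = {
    "SVHN": (20000, 1000),
    "CIFAR-10": (20000, 1000),
    "COCO": (40000, 1000),
    "Mugshot": (30000, 1000),
    "mini-ImageNet": (10000, 1000),
}

total_train = sum(n_train for n_train, _ in splits.values())  # 120000
all_test_equal = all(n_test == 1000 for _, n_test in splits.values())
```

Every dataset uses the same 1000-sample test set size, while training set sizes vary from 10k to 40k.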
Hardware Specification | No | The paper does not provide specific hardware details (e.g., CPU/GPU models, memory specifications) used for running its experiments.
Software Dependencies | No | The paper mentions 'The Adam algorithm is adopted' and references 'Sklearn.svm. http://scikit-learn.sourceforge.net/stable/modules/generated/sklearn.svm.SVC.html', but does not specify version numbers for any software dependencies or libraries.
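The sklearn.svm.SVC reference suggests an SVM was used to probe the learned representations, a common disentanglement evaluation. A minimal sketch of that idea, assuming randomly generated stand-in features rather than the authors' encoder outputs:

```python
# Hedged sketch: probing a representation with a linear SVM via
# sklearn.svm.SVC, as the paper's reference suggests. The synthetic
# "codes" below stand in for encoder outputs; feature index == class
# label is given an artificial bump so the probe has signal to find.
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
labels = rng.integers(0, 10, size=200)      # hypothetical object classes
codes = rng.normal(size=(200, 30))          # hypothetical 30-d codes
codes[np.arange(200), labels] += 3.0        # inject class-dependent signal

clf = SVC(kernel="linear").fit(codes[:150], labels[:150])
accuracy = clf.score(codes[150:], labels[150:])
```

A high probe accuracy on held-out codes indicates the class information is linearly recoverable from the representation.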
Experiment Setup | Yes | The Adam algorithm is adopted. The learning rate is set to 0.0005. In the experiment, the balance parameters τ, α, γ, η, and λ are set to 1, β to 10, ρ to 1000, and δ to 5. Through extensive experiments, we find that the crucial parameters are β, ρ, and δ. Tuning β, ρ, and δ may lead to better performance under the condition that τ, α, γ, η, and λ are set to 1.
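The reported balance parameters act as weights on the individual loss terms. A minimal sketch of that weighted combination, assuming placeholder loss values and term names (the paper's exact loss notation is not reproduced here):

```python
# Hedged sketch: combining loss terms with the paper's reported
# balance parameters. The per-term loss values are placeholders.
tau = alpha = gamma = eta = lam = 1.0   # τ, α, γ, η, λ = 1
beta, rho, delta = 10.0, 1000.0, 5.0    # β = 10, ρ = 1000, δ = 5

losses = {"tau": 0.30, "alpha": 0.25, "gamma": 0.10, "eta": 0.05,
          "lam": 0.20, "beta": 0.02, "rho": 0.001, "delta": 0.04}

total = (tau * losses["tau"] + alpha * losses["alpha"]
         + gamma * losses["gamma"] + eta * losses["eta"]
         + lam * losses["lam"] + beta * losses["beta"]
         + rho * losses["rho"] + delta * losses["delta"])
```

Because ρ = 1000 multiplies its term, even a small value of that loss dominates the total, which is consistent with the authors' observation that β, ρ, and δ are the crucial parameters to tune.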