ComGAN: Unsupervised Disentanglement and Segmentation via Image Composition

Authors: Rui Ding, Kehua Guo, Xiangyuan Zhu, Zheng Wu, Liwei Wang

NeurIPS 2022

| Reproducibility Variable | Result | LLM Response |
| --- | --- | --- |
| Research Type | Experimental | Experimental results show that (i) ComGAN's network architecture effectively avoids trivial solutions without any supervised information or regularization; (ii) DS-ComGAN achieves remarkable results and outperforms existing semi-supervised and weakly supervised methods by a large margin in both the image disentanglement and unsupervised segmentation tasks. This implies that the redesign of ComGAN is a possible direction for future unsupervised work. (See the composition sketch following the table.) |
| Researcher Affiliation | Academia | Rui Ding (Central South University, ruiding@csu.edu.cn); Kehua Guo (Central South University, guokehua@csu.edu.cn); Xiangyuan Zhu (Central South University, zhuxiangyuan@csu.edu.cn); Zheng Wu (Central South University, wuzhenghuse@gmail.com); Liwei Wang (Central South University, wang.liwei@csu.edu.cn) |
| Pseudocode | No | No pseudocode or algorithm blocks (e.g., clearly labeled "Algorithm" or "Pseudocode" sections) were found in the paper. |
| Open Source Code | Yes | A footnote states: "Code and data are available at https://github.com/Ruiding1/ComGAN" |
| Open Datasets | Yes | The experiments are conducted on five fine-grained image datasets and a multi-object dataset: CUB [39], FS-100 [40], Stanford-Cars [41], Stanford-Dogs [41], Flowers [42], and CLEVR6 [43]. |
| Dataset Splits | No | No explicit details on validation dataset splits (e.g., specific percentages or sample counts for a validation set) were provided in the main text of the paper. |
| Hardware Specification | No | No specific hardware details (e.g., GPU models, CPU types, or cloud computing instances with specifications) used for running experiments were mentioned in the paper. |
| Software Dependencies | No | No specific software dependencies with version numbers (e.g., "PyTorch 1.9", "Python 3.8") were provided in the paper. |
| Experiment Setup | No | The paper mentions loss functions and the hyperparameters β (detailed in Appendix C.1) and λ (mentioned implicitly in the text), but does not provide concrete numerical values for core training hyperparameters such as learning rate, batch size, or number of epochs in the main text. |
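
For readers unfamiliar with the mechanism named in the title, the sketch below illustrates the general form of mask-based image composition used by composition GANs: a generator produces a foreground, a background, and a soft mask, and the composite image is their alpha blend. This is a minimal PyTorch sketch of that general pattern; all function names and tensor shapes are illustrative assumptions, not the authors' actual ComGAN code (which is in the linked repository).

```python
# Minimal sketch of mask-based image composition (assumed general form,
# not the authors' implementation).
import torch

def compose(foreground: torch.Tensor,
            background: torch.Tensor,
            mask: torch.Tensor) -> torch.Tensor:
    """Blend a foreground and background batch with a soft mask.

    foreground, background: (B, 3, H, W) images in [-1, 1].
    mask: (B, 1, H, W) soft segmentation mask in [0, 1], broadcast
    across the channel dimension.
    """
    return mask * foreground + (1.0 - mask) * background

# Toy usage: random tensors standing in for generator branch outputs.
b, h, w = 4, 64, 64
fg = torch.rand(b, 3, h, w) * 2 - 1          # hypothetical foreground branch
bg = torch.rand(b, 3, h, w) * 2 - 1          # hypothetical background branch
m = torch.sigmoid(torch.randn(b, 1, h, w))   # hypothetical mask branch
composite = compose(fg, bg, m)
assert composite.shape == (b, 3, h, w)
```

Because the discriminator only ever sees the composite, the mask must carve out a plausible foreground for the blend to look realistic, which is why such masks can double as unsupervised segmentations; per the abstract above, it is ComGAN's network architecture that prevents trivial all-foreground or all-background masks.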