High Fidelity GAN Inversion via Prior Multi-Subspace Feature Composition
Authors: Guanyue Li, Qianfen Jiao, Sheng Qian, Si Wu, Hau-San Wong
AAAI 2021, pp. 8366-8374 | Conference PDF | Archive PDF | Plain Text | LLM Run Details
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | In the experiments, the superior performance of PmSFC demonstrates the effectiveness of prior subspaces in facilitating GAN inversion, together with extended applications in visual manipulation. |
| Researcher Affiliation | Collaboration | 1School of Computer Science and Engineering, South China University of Technology, Guangzhou, P. R. China 2Department of Computer Science, City University of Hong Kong, Kowloon, Hong Kong 3Huawei Device Company Limited, Shenzhen, P. R. China |
| Pseudocode | Yes | Algorithm 1: Pseudo-code of subspace discovery over GAN's intermediate feature channels. |
| Open Source Code | No | The paper does not contain an explicit statement about releasing source code for the described methodology, nor does it provide a link to a code repository. |
| Open Datasets | Yes | Extensive experiments are conducted to evaluate the proposed PmSFC model on a variety of datasets, including CelebA-HQ (Karras et al. 2018) and LSUN (Yu et al. 2015). These datasets are widely used for image synthesis. |
| Dataset Splits | No | The paper mentions training and testing data, but does not explicitly describe a validation dataset split or a validation process with specific details. |
| Hardware Specification | No | The paper does not provide specific details about the hardware used for running the experiments, such as GPU models or CPU specifications. |
| Software Dependencies | No | The paper mentions the use of 'Adam optimizer (Kingma and Ba 2015)' but does not specify software dependencies with version numbers (e.g., Python, specific libraries like PyTorch, TensorFlow, or scikit-learn versions). |
| Experiment Setup | Yes | For prior subspace discovery, the fully-connected self-expressive layer has size 512 × 512. We train the layer for 3000 iterations using the Adam optimizer (Kingma and Ba 2015) with a learning rate of 0.0001 and momentum parameters (0.9, 0.999). To reach a balance among the terms in the overall training loss, the weighting factors λ and µ in Eq. (5) are set to 10 and 1, respectively. For the inversion and extended tasks, the settings are the same as above, but the number of iterations increases to 7000. |
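
The Experiment Setup row maps naturally onto a small optimization loop. Below is a minimal PyTorch sketch of how the quoted settings (512 × 512 self-expressive layer, Adam with learning rate 0.0001 and momentum parameters (0.9, 0.999), 3000 iterations, λ = 10, µ = 1) might be wired together. The feature tensor, the specific loss terms, and all variable names are illustrative assumptions, not the authors' actual formulation or released code.

```python
# Hypothetical sketch of the self-expressive-layer training setup quoted above.
# Only the layer shape, optimizer settings, iteration counts, and weighting
# factors come from the paper's text; the reconstruction and regularization
# terms are placeholders.
import torch
import torch.nn as nn

# Stand-in for the GAN's intermediate feature channels (512 channels, flattened).
feats = torch.randn(512, 4096)

# Self-expressive coefficient matrix C, realized as a 512 x 512 linear layer.
self_expr = nn.Linear(512, 512, bias=False)
optimizer = torch.optim.Adam(self_expr.parameters(),
                             lr=1e-4, betas=(0.9, 0.999))

lam, mu = 10.0, 1.0  # weighting factors λ and µ referenced as Eq. (5)

for step in range(3000):  # 7000 iterations for the inversion and extended tasks
    optimizer.zero_grad()
    recon = self_expr.weight @ feats          # express each channel via the others
    loss_rec = ((recon - feats) ** 2).mean()  # placeholder reconstruction term
    loss_reg = self_expr.weight.abs().mean()  # placeholder regularizer on C
    loss = lam * loss_rec + mu * loss_reg
    loss.backward()
    optimizer.step()
```

In a self-expressive formulation, the learned coefficient matrix is what exposes the channel subspaces (e.g., via clustering of its rows or columns); the sketch above only illustrates the optimizer and weighting configuration reported in the paper.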