Notice: The reproducibility variables underlying each score are classified using an automated LLM-based pipeline, validated against a manually labeled dataset. LLM-based classification introduces uncertainty and potential bias; scores should be interpreted as estimates. Full accuracy metrics and methodology are described in [1].
ACFun: Abstract-Concrete Fusion Facial Stylization
Authors: Jiapeng Ji, Kun Wei, Ziqi Zhang, Cheng Deng
NeurIPS 2024
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | In this section, we conduct experiments using images collected from the Internet and provide visual comparisons. |
| Researcher Affiliation | Academia | Jiapeng Ji, Kun Wei, Ziqi Zhang, Cheng Deng, School of Electronic Engineering, Xidian University, Xi'an 710071, China. EMAIL, EMAIL, EMAIL, EMAIL |
| Pseudocode | No | The paper includes architectural diagrams (Figure 2 and Figure 3) and descriptive text for its components, but it does not contain any formal pseudocode blocks or algorithms labeled as such. |
| Open Source Code | No | The paper does not provide open access to the data and code due to proprietary restrictions. However, detailed descriptions of the experimental setup and model architecture are provided in Sections 3 and 4, ensuring the experiments can be understood and replicated by researchers with similar resources. |
| Open Datasets | No | We conduct experiments using images collected from the Internet and provide visual comparisons. |
| Dataset Splits | No | The paper mentions training on 'a single pair of images' and evaluates performance, but it does not specify explicit train/validation/test splits with percentages or sample counts for the datasets used in its experiments. |
| Hardware Specification | Yes | We trained on a single Nvidia A6000 graphics card, and in the case of a single pair of images, we set the batch size to 1. |
| Software Dependencies | No | The paper mentions using 'Stable Diffusion' as its backbone model (e.g., SD1.5, SDXL, SD1.4) and 'CLIP' for encoding, but it does not provide specific version numbers for these or any other software dependencies. |
| Experiment Setup | Yes | We set the base learning rate to 1.0e-04, and the remaining hyperparameters are consistent with Stable Diffusion, left unchanged. Through 40 steps of diffusion, our method can obtain stylized facial images with good results. We set the hyperparameters γ and β to 0.8 and 1.0, respectively, and all subsequent experiments use this hyperparameter setting. |
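The reported setup above can be collected into a single configuration object. This is a minimal illustrative sketch only; the class and field names are hypothetical and not taken from the paper's (unreleased) code, while the values are the ones the paper reports.

```python
from dataclasses import dataclass


@dataclass
class ACFunTrainingConfig:
    """Hypothetical container for the hyperparameters reported in the paper."""
    base_learning_rate: float = 1.0e-4  # base learning rate
    batch_size: int = 1                 # single pair of images per step
    diffusion_steps: int = 40           # diffusion steps for stylization
    gamma: float = 0.8                  # reported hyperparameter γ
    beta: float = 1.0                   # reported hyperparameter β
    backbone: str = "Stable Diffusion"  # remaining hyperparameters follow the backbone defaults


cfg = ACFunTrainingConfig()
print(cfg.base_learning_rate, cfg.diffusion_steps, cfg.gamma, cfg.beta)
```

Note that the paper states all hyperparameters other than those listed are inherited unchanged from the Stable Diffusion backbone, so a replication attempt would pull the rest from that model's default training configuration.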