Notice: The reproducibility variables underlying each score are classified using an automated LLM-based pipeline, validated against a manually labeled dataset. LLM-based classification introduces uncertainty and potential bias; scores should be interpreted as estimates. Full accuracy metrics and methodology are described in [1].
FreeGen: Bridging Visual-Linguistic Discrepancies Towards Diffusion-based Pixel-level Data Synthesis
Authors: Wenzhuang Wang, Mingcan Ma, Yong Chen, Changqun Xia, Zhenbao Liang, Jia Li
AAAI 2025
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Extensive experiments show that the existing segmenters trained on FreeGen narrow the performance gap with real-data counterparts and remarkably outperform the state-of-the-art methods. ... (from the section "Experiments: Datasets and Experimental Settings") |
| Researcher Affiliation | Collaboration | 1 State Key Laboratory of Virtual Reality Technology and Systems, SCSE, Beihang University; 2 Geely Automobile Research Institute; 3 Pengcheng Laboratory |
| Pseudocode | No | The paper describes the methodology using narrative text and mathematical equations but does not include a dedicated pseudocode or algorithm block. |
| Open Source Code | No | The paper does not contain an explicit statement regarding the release of source code or a link to a code repository for the methodology described. |
| Open Datasets | Yes | We conduct experiments on 3 benchmarks: VOC 2012 (Everingham et al. 2010), augmented with SBD (Hariharan et al. 2011) for 10.6k training and 1,449 validation images in 20 classes; COCO 2017 (Lin et al. 2014) with 118,287 training and 5k validation images in 80 classes; and Cityscapes (Cordts et al. 2016), an urban scene dataset with 2,975 training and 500 validation images in 19 classes. |
| Dataset Splits | Yes | We conduct experiments on 3 benchmarks: VOC 2012 (Everingham et al. 2010), augmented with SBD (Hariharan et al. 2011) for 10.6k training and 1,449 validation images in 20 classes; COCO 2017 (Lin et al. 2014) with 118,287 training and 5k validation images in 80 classes; and Cityscapes (Cordts et al. 2016), an urban scene dataset with 2,975 training and 500 validation images in 19 classes. |
| Hardware Specification | No | The paper mentions using Stable Diffusion V2.1 and U-Net, but does not provide specific hardware details such as GPU/CPU models or memory specifications used for running experiments. |
| Software Dependencies | Yes | We train DeepLabV3 and Mask2Former on synthetic data following MMSegmentation's default settings (Contributors 2020) and compare them with models trained on real data. Following (Nguyen et al. 2024), to enhance the variety of textual guidance, we adopt the image captioner BLIP (Li et al. 2022c) and ChatGPT (Achiam et al. 2023) to derive more text prompts for object categories. |
| Experiment Setup | Yes | Our FreeGen, based on the Stable Diffusion V2.1 pre-trained on LAION-5B (Schuhmann et al. 2022), generates 512×512 image-mask pairs via 50 denoising steps. ... We train our ASR and EGD using the Adam optimizer with β1 = 0.9 and β2 = 0.999 for 20 epochs, where the batch size is 4 and the learning rate is 1e-4. |
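The Experiment Setup row quotes concrete Adam hyperparameters (β1 = 0.9, β2 = 0.999, learning rate 1e-4). As a minimal sketch of what those settings mean, the snippet below implements a single-parameter Adam update in plain Python with exactly those values; it is an illustration of the quoted optimizer configuration, not the authors' training code, and the quadratic objective is a hypothetical stand-in.

```python
import math

def adam_step(theta, grad, m, v, t,
              lr=1e-4, beta1=0.9, beta2=0.999, eps=1e-8):
    """One Adam update for a scalar parameter; returns (theta, m, v)."""
    m = beta1 * m + (1 - beta1) * grad        # first-moment (mean) estimate
    v = beta2 * v + (1 - beta2) * grad ** 2   # second-moment (variance) estimate
    m_hat = m / (1 - beta1 ** t)              # bias correction for step t
    v_hat = v / (1 - beta2 ** t)
    theta = theta - lr * m_hat / (math.sqrt(v_hat) + eps)
    return theta, m, v

# Toy usage: minimize f(theta) = theta**2 (gradient 2*theta) from theta = 1.0.
theta, m, v = 1.0, 0.0, 0.0
for t in range(1, 101):
    theta, m, v = adam_step(theta, 2 * theta, m, v, t)
```

Because the bias-corrected ratio m_hat / sqrt(v_hat) is close to the gradient's sign for a slowly varying gradient, each step moves the parameter by roughly the learning rate (1e-4), which is why small learning rates like the one quoted give stable fine-grained updates.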