CoCoG: Controllable Visual Stimuli Generation Based on Human Concept Representations

Authors: Chen Wei, Jiachen Zou, Dietmar Heinke, Quanying Liu

IJCAI 2024 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | The experiments with CoCoG indicate that 1) the reliable concept embeddings in CoCoG allow predicting human behavior with 64.07% accuracy on the THINGS-similarity dataset; 2) CoCoG can generate diverse objects through the control of concepts; 3) CoCoG can manipulate human similarity judgment behavior by intervening on key concepts. (A toy sketch of the triplet choice rule follows the table.)
Researcher Affiliation | Academia | 1 Southern University of Science and Technology, Shenzhen, China; 2 University of Birmingham, Birmingham, United Kingdom. {weic3, zoujc2022}@mail.sustech.edu.cn, d.g.heinke@bham.ac.uk, liuqy@sustech.edu.cn
Pseudocode | No | The paper describes the method verbally and with diagrams, but does not include any structured pseudocode or algorithm blocks.
Open Source Code | Yes | The code of CoCoG is available at https://github.com/ncclab-sustech/CoCoG.
Open Datasets | Yes | We used the triplet odd-one-out similarity judgment task in the THINGS dataset [Hebart et al., 2023].
Dataset Splits | No | The paper mentions using the THINGS odd-one-out dataset to train and validate the concept encoder, but it does not provide specific details on the train/validation/test splits, such as percentages, sample counts, or explicit references to predefined splits in the main text.
Hardware Specification | No | The paper does not provide specific hardware details such as GPU models, CPU types, or memory specifications used for running the experiments.
Software Dependencies | No | The paper mentions using a 'CLIP image encoder', 'pre-trained SDXL and IP-Adapter models', and 'U-Net', but it does not provide specific version numbers for any of these software dependencies.
Experiment Setup | No | The paper states 'Specific training parameters are shown in the Appendix,' indicating that these details are not provided in the main body of the paper.
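The Research Type row above refers to predicting triplet odd-one-out judgments from concept embeddings. As a minimal illustration of how such a choice rule can be scored (a generic sketch of the standard THINGS-style triplet model, not code from the CoCoG repository; the function name and embedding values are hypothetical):

```python
import numpy as np

def predict_odd_one_out(emb_a, emb_b, emb_c):
    """Predict the odd-one-out in a triplet from concept embeddings.

    Assumes the common choice rule for THINGS-style triplet tasks:
    the most similar pair is kept together, and the remaining item
    is predicted as the odd one out.
    """
    embs = [np.asarray(emb_a), np.asarray(emb_b), np.asarray(emb_c)]
    # Pairwise dot-product similarities for the pairs (a,b), (a,c), (b,c).
    sims = np.array([embs[0] @ embs[1], embs[0] @ embs[2], embs[1] @ embs[2]])
    # The item excluded from the most similar pair is the predicted odd one:
    # pair (a,b) -> c is odd, pair (a,c) -> b is odd, pair (b,c) -> a is odd.
    return [2, 1, 0][int(np.argmax(sims))]

# Toy usage with made-up 3-dimensional concept embeddings.
print(predict_odd_one_out([0.9, 0.1, 0.0], [0.8, 0.2, 0.1], [0.0, 0.1, 0.9]))  # -> 2
```

Accuracy on a set of triplets would then be the fraction of predictions that match the human choices, which is how a figure such as the reported 64.07% could be computed under these assumptions.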