ContraFeat: Contrasting Deep Features for Semantic Discovery

Authors: Xinqi Zhu, Chang Xu, Dacheng Tao

AAAI 2023

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Experimentally, our models can obtain state-of-the-art semantic discovery results without relying on latent layer-wise manual selection, and these discovered semantics can be used to manipulate real-world images. We design two metrics to quantitatively evaluate the performance of semantic discovery methods on the FFHQ dataset, and also show that disentangled representations can be derived via a simple training process. (The latent-editing sketch after the table illustrates the manipulation operation described here.)
Researcher Affiliation | Academia | Xinqi Zhu, Chang Xu, Dacheng Tao; The University of Sydney, Australia; xzhu7491@uni.sydney.edu.au, c.xu@sydney.edu.au, dacheng.tao@gmail.com
Pseudocode | No | The paper includes diagrams of the model architecture and processes (e.g., Figure 2, Figure 3), but it does not contain any formal pseudocode blocks or algorithm listings.
Open Source Code | No | The paper does not contain an explicit statement about the release of source code for the described methodology, nor does it provide a link to a code repository.
Open Datasets | Yes | We design two quantitative metrics on the FFHQ dataset (Karras, Laine, and Aila 2020) to evaluate the semantic discovery performance of models. ... We show the state-of-the-art comparison on 3DShapes in Table 4.
Dataset Splits | No | The paper mentions using "a set of N samples" for evaluation and describes training models, but it does not specify explicit training, validation, or test splits (e.g., percentages or counts), nor does it reference standard predefined splits.
Hardware Specification | No | The paper does not provide any specific details about the hardware (e.g., GPU models, CPU types, memory) used to run the experiments.
Software Dependencies | No | The paper does not list specific versions of software components, libraries, or frameworks used in the experiments (e.g., Python version, PyTorch version).
Experiment Setup | No | The paper describes the model components and loss functions but does not provide specific experimental setup details such as hyperparameter values (e.g., learning rate, batch size, number of epochs) or optimizer settings. (The placeholder config after the table enumerates the kind of fields left unspecified.)
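
The Research Type row quotes the paper's claim that discovered semantics can be used to manipulate real-world images. The sketch below shows only the generic latent-direction edit this family of methods relies on; since no code is released (see the Open Source Code row), the generator G, the 512-dim latent size, and the direction d are all hypothetical placeholders, not the authors' implementation.

```python
# Generic latent-direction editing, the operation behind "discovered
# semantics can be used to manipulate real-world images".
# G, the 512-dim latent size, and d are hypothetical placeholders; the
# paper releases no code, so this is an illustration, not its method.
import numpy as np

rng = np.random.default_rng(0)

def G(z):
    """Stand-in generator; a real pipeline would call a pretrained StyleGAN."""
    return np.tanh(z)

z = rng.standard_normal(512)      # latent code, e.g. obtained by inverting a photo
d = rng.standard_normal(512)
d /= np.linalg.norm(d)            # one discovered semantic direction, unit norm

# Varying the strength alpha moves the output along a single attribute
# (e.g. smile or pose) while the rest of the image stays fixed.
edits = {alpha: G(z + alpha * d) for alpha in (-3.0, 0.0, 3.0)}
```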
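
Because the Experiment Setup row finds no hyperparameters in the paper, a reproducer must choose them independently. The dataclass below only enumerates the kind of fields a complete setup disclosure would pin down; every default value is a hypothetical placeholder, not a value reported by the authors.

```python
# Hypothetical checklist of the setup details the paper does not report.
# All defaults are placeholders a reproducer would have to choose; none
# are taken from the paper.
from dataclasses import dataclass

@dataclass
class ExperimentSetup:
    dataset: str = "FFHQ"          # or "3DShapes"; both are named in the paper
    optimizer: str = "Adam"        # optimizer family and settings: unreported
    learning_rate: float = 1e-4    # unreported
    batch_size: int = 32           # unreported
    training_steps: int = 100_000  # unreported (epochs/steps)
    seed: int = 0                  # unreported

config = ExperimentSetup()
print(config)
```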