Notice: The reproducibility variables underlying each score are classified using an automated LLM-based pipeline, validated against a manually labeled dataset. LLM-based classification introduces uncertainty and potential bias; scores should be interpreted as estimates. Full accuracy metrics and methodology are described in [1].

Using Skew to Assess the Quality of GAN-generated Image Features

Authors: Lorenzo Luzi, Helen Jenne, Carlos Ortiz Marrero, Ryan Murray

TMLR 2024 | Venue PDF | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Our numerical experiments support that SID either tracks with FID or, in some cases, aligns more closely with human perception when evaluating image features of ImageNet data. Our work also shows that principal component analysis can be used to speed up the computation time of both FID and SID.
Researcher Affiliation | Academia | Lorenzo Luzi (Rice University, Pacific Northwest National Laboratory), Helen Jenne (Pacific Northwest National Laboratory), Carlos Ortiz Marrero (Pacific Northwest National Laboratory, North Carolina State University), Ryan Murray (North Carolina State University)
Pseudocode | No | The paper describes the method using mathematical formulations and descriptive text, but no explicit pseudocode or algorithm blocks are provided.
Open Source Code | No | The paper does not contain an explicit statement about releasing the source code for the proposed Skew Inception Distance (SID) methodology. While it references a repository for pretrained diffusion models used in experiments, it does not provide access to its own implementation.
Open Datasets | Yes | Our numerical experiments support that SID either tracks with FID or, in some cases, aligns more closely with human perception when evaluating image features of ImageNet data. Additionally, to explore SID's applicability to different types of generative approaches, we studied SID using data from diffusion models, using pretrained diffusion models that were trained on Stanford Cars, CelebA (Liu et al., 2015), AFHQ, and the Flowers dataset (Tung, 2020) to generate images.
Dataset Splits | Yes | All experiments are done using Inception-v3 as a feature extractor on the entire (50K) ImageNet validation set.
Hardware Specification | No | The paper mentions using 'GPU' and 'CPU' for calculations and performance comparisons in Table 2, but it does not specify any particular models or detailed hardware specifications (e.g., 'NVIDIA A100', 'Intel Xeon').
Software Dependencies | No | The paper mentions the 'scipy.linalg.sqrtm function' but does not provide specific version numbers for Python, SciPy, or any other software libraries used for implementation.
Experiment Setup | Yes | We saw that values of α = 10,000 and m = 150 seemed to work well in the examples we tested. 1. Gaussian noise: We added Gaussian noise with σ ∈ {0, 0.001, 0.005, 0.01, …, 0.1}. 2. Salt and pepper noise: We changed a proportion p of the pixels in the image to black or white (with equal probability) for p ∈ {0, 0.5%, 1.0%, 1.5%, …, 20%}. 3. Gaussian blur: We convolved the image with a Gaussian kernel with standard deviation σ ∈ {0.1, 0.2, 0.3, …, 1.0}. 4. Adding black rectangles as occlusions: We added five rectangles at randomly chosen locations, where the scale increases with scale parameter s ∈ {1%, 2%, 3%, …, 20%}.
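The Experiment Setup row lists four image perturbations used to probe the metrics. A minimal sketch of the first two (Gaussian noise and salt-and-pepper), assuming images are float arrays in [0, 1]; the function names are illustrative, not from the paper:

```python
import numpy as np

def add_gaussian_noise(img, sigma, rng):
    """Add zero-mean Gaussian noise with standard deviation sigma, clipped to [0, 1]."""
    return np.clip(img + rng.normal(0.0, sigma, img.shape), 0.0, 1.0)

def add_salt_and_pepper(img, p, rng):
    """Set a proportion p of pixels to black (0) or white (1) with equal probability."""
    out = img.copy()
    flat = out.reshape(-1)                      # view into the copy
    n_corrupt = int(round(p * flat.size))
    idx = rng.choice(flat.size, size=n_corrupt, replace=False)
    flat[idx] = rng.integers(0, 2, size=n_corrupt).astype(img.dtype)
    return out
```

The blur and occlusion perturbations follow the same pattern (a Gaussian kernel convolution and randomly placed black rectangles, respectively).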
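The Software Dependencies row notes the use of scipy.linalg.sqrtm; that function is the standard ingredient for the matrix square root in the Fréchet (FID-style) distance between Gaussians fitted to two feature sets. A sketch of that computation, not the paper's own implementation:

```python
import numpy as np
from scipy import linalg

def frechet_distance(feats_a, feats_b):
    """Fréchet distance between Gaussians fitted to two (n, d) feature arrays."""
    mu_a, mu_b = feats_a.mean(axis=0), feats_b.mean(axis=0)
    cov_a = np.cov(feats_a, rowvar=False)
    cov_b = np.cov(feats_b, rowvar=False)
    # Matrix square root of the covariance product, via scipy.linalg.sqrtm
    covmean = linalg.sqrtm(cov_a @ cov_b)
    if np.iscomplexobj(covmean):  # discard tiny imaginary parts from numerics
        covmean = covmean.real
    diff = mu_a - mu_b
    return float(diff @ diff + np.trace(cov_a + cov_b - 2.0 * covmean))
```

Identical feature sets give a distance of (numerically) zero; shifting one set's mean increases it.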
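The Research Type row quotes the claim that PCA can speed up FID and SID. One plausible reading is projecting the Inception features onto their top principal components before fitting the Gaussians, which shrinks the covariance matrices involved. A sketch under that assumption:

```python
import numpy as np

def pca_project(feats_a, feats_b, k):
    """Fit PCA on the pooled features and project both sets onto the top-k components."""
    pooled = np.vstack([feats_a, feats_b])
    mu = pooled.mean(axis=0)
    # SVD of the centered pooled features; rows of vt are principal directions
    _, _, vt = np.linalg.svd(pooled - mu, full_matrices=False)
    components = vt[:k].T  # shape (d, k)
    return (feats_a - mu) @ components, (feats_b - mu) @ components
```

With k equal to the full feature dimension the projection is an orthogonal rotation and preserves all variance; smaller k trades fidelity for speed.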
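The Skew Inception Distance is built on third-moment (skewness) information of the extracted features; the exact SID formula is given in the paper. As an illustration only, per-dimension sample skewness of a feature matrix can be computed with scipy.stats.skew:

```python
import numpy as np
from scipy import stats

def per_dim_skew(feats):
    """Per-dimension sample skewness (third standardized moment) of (n, d) features."""
    return stats.skew(feats, axis=0)
```

Symmetric feature distributions have skewness near zero, while asymmetric ones (e.g. exponential) do not, which is the kind of signal a skew-based metric can pick up that mean-and-covariance metrics like FID cannot.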