CW Complex Hypothesis for Image Data
Authors: Yi Wang, Zhiren Wang
ICML 2024
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We support the hypothesis by visualizing distributions of 2D families of synthetic image data, as well as by introducing a novel indicator function and testing it on natural image datasets. |
| Researcher Affiliation | Academia | (1) Department of Mathematics, Johns Hopkins University; (2) Department of Mathematics, Pennsylvania State University. |
| Pseudocode | No | No pseudocode or algorithm blocks were found in the paper. |
| Open Source Code | No | We train three DDPM (Ho et al., 2020) models of the same architecture from the open source implementation (Vandegar, 2023) respectively on S_high, S_low, and S. (This refers to a third-party implementation, not the authors' own code release.) |
| Open Datasets | Yes | Figures 3-5 compare I_k, at k = 100, for individual label classes in MNIST, FMNIST, SVHN, CIFAR-10, CIFAR-100, and ImageNet... |
| Dataset Splits | No | In testing, the classifier is 100% accurate on validation data from S_high and S_low, as well as on the separately generated counterparts of these sets. (This refers to validation data for a classifier used within the experiment, not to the dataset splits needed to reproduce the primary findings.) |
| Hardware Specification | No | No specific hardware details (GPU/CPU models, memory, or specific computing environments) were mentioned for the experiments. |
| Software Dependencies | No | We train three DDPM (Ho et al., 2020) models of the same architecture from the open source implementation (Vandegar, 2023) respectively on S_high, S_low, and S. (No specific version numbers for software dependencies are provided.) |
| Experiment Setup | Yes | We train three DDPM (Ho et al., 2020) models of the same architecture from the open source implementation (Vandegar, 2023) respectively on S_high, S_low, and S. For a fair comparison, the number of SGD training steps is 50,000 for all models, but the batch size is doubled from 32 to 64 when training the model on S, so that each image in S_high is used approximately the same number of times whether training on S_high or on S, and similarly for S_low. (An illustrative training-loop sketch follows this table.) |
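
The experiment setup above pins down only the step count and the batch sizes. The following is a minimal PyTorch-style sketch of that configuration, assuming S is the union of S_high and S_low as the batch-size rationale suggests; the tiny noise-prediction network, the simplified DDPM loss, and the random stand-in datasets are illustrative placeholders, not the authors' code or the Vandegar (2023) implementation.

```python
# Sketch of the training configuration: three DDPMs of identical architecture,
# 50,000 SGD steps each, batch size 32 on S_high and S_low, 64 on S.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset

T = 1000                                          # diffusion timesteps (assumed)
betas = torch.linspace(1e-4, 0.02, T)             # standard linear beta schedule
alpha_bars = torch.cumprod(1.0 - betas, dim=0)    # cumulative product \bar{alpha}_t

class TinyEpsNet(nn.Module):
    """Stand-in noise-prediction network; the same architecture is reused for all runs."""
    def __init__(self, channels=1):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(channels + 1, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, channels, 3, padding=1),
        )

    def forward(self, x, t):
        # Broadcast the normalized timestep as an extra input channel.
        t_map = (t.float() / T).view(-1, 1, 1, 1).expand(-1, 1, *x.shape[2:])
        return self.net(torch.cat([x, t_map], dim=1))

def ddpm_loss(model, x0):
    """Simplified DDPM objective: predict the noise added at a random timestep."""
    t = torch.randint(0, T, (x0.shape[0],))
    eps = torch.randn_like(x0)
    ab = alpha_bars[t].view(-1, 1, 1, 1)
    xt = ab.sqrt() * x0 + (1 - ab).sqrt() * eps
    return nn.functional.mse_loss(model(xt, t), eps)

def train_ddpm(dataset, batch_size, num_steps=50_000, lr=2e-4):
    model = TinyEpsNet()
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    loader = DataLoader(dataset, batch_size=batch_size, shuffle=True, drop_last=True)
    it = iter(loader)
    for _ in range(num_steps):
        try:
            (x0,) = next(it)
        except StopIteration:                     # restart the loader at epoch end
            it = iter(loader)
            (x0,) = next(it)
        loss = ddpm_loss(model, x0)
        opt.zero_grad()
        loss.backward()
        opt.step()
    return model

# Random stand-ins for S_high, S_low, and S = S_high ∪ S_low (28x28 grayscale here).
s_high = TensorDataset(torch.rand(1024, 1, 28, 28))
s_low  = TensorDataset(torch.rand(1024, 1, 28, 28))
s_full = TensorDataset(torch.cat([s_high.tensors[0], s_low.tensors[0]]))

model_high = train_ddpm(s_high, batch_size=32)
model_low  = train_ddpm(s_low,  batch_size=32)
model_full = train_ddpm(s_full, batch_size=64)
```

Doubling the batch size on S halves the number of passes over the combined set, so each individual image contributes to roughly the same number of gradient updates in every run, which matches the fairness rationale quoted above.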