pcaGAN: Improving Posterior-Sampling cGANs via Principal Component Regularization
Authors: Matthew Bendel, Rizwan Ahmad, Philip Schniter
NeurIPS 2024
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Numerical experiments demonstrate that our method outperforms contemporary cGANs and diffusion models in imaging inverse problems like denoising, large-scale inpainting, and accelerated MRI recovery. |
| Researcher Affiliation | Academia | Matthew C. Bendel, Dept. ECE, The Ohio State University, Columbus, OH 43210, bendel.8@osu.edu; Rizwan Ahmad, Dept. BME, The Ohio State University, Columbus, OH 43210, ahmad.46@osu.edu; Philip Schniter, Dept. ECE, The Ohio State University, Columbus, OH 43210, schniter.1@osu.edu |
| Pseudocode | Yes | Algorithm 1 details our proposed approach to training the pcaGAN. In particular, it describes the steps used to perform a single update of the generator parameters θ based on the training batch {(x_b, y_b)} for b = 1, …, B. Before diving into the details, we offer a brief summary of Algorithm 1. |
| Open Source Code | Yes | The code for our model can be found here: https://github.com/matt-bendel/pcaGAN. |
| Open Datasets | Yes | We randomly split the MNIST training fold into 50 000 training and 10 000 validation images, and we use the entire MNIST test fold for testing. |
| Dataset Splits | Yes | For each d, we generate 70 000 training, 20 000 validation, and 10 000 test samples. |
| Hardware Specification | Yes | Running PyTorch on a server with 4 Tesla A100 GPUs, each with 82 GB of memory, the cGAN training for d = 100 takes approximately 8 hours, with training time decreasing with smaller d. |
| Software Dependencies | No | Running PyTorch on a server with 4 Tesla A100 GPUs, each with 82 GB of memory... |
| Experiment Setup | Yes | In each experiment, all cGANs were trained using the Adam optimizer with a learning rate of 10⁻³, β₁ = 0, and β₂ = 0.99 as in [55]. We choose β_adv = 10⁻⁵, n_batch = 64, P_rc = 2, and train for 100 epochs for both rcGAN and pcaGAN. ... For pcaGAN, we choose K = d for each d in this experiment (unless otherwise noted) and β_pca = 10⁻². |
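The experiment-setup row quotes specific Adam hyperparameters (learning rate 10⁻³, β₁ = 0, β₂ = 0.99). As a minimal sketch of what one such update looks like, here is a scalar pure-Python Adam step using those quoted values; the function name and scalar setting are illustrative only, not taken from the paper's code.

```python
# Hedged sketch: one Adam update with the quoted hyperparameters
# lr = 1e-3, beta1 = 0, beta2 = 0.99 (eps is a common default, assumed here).
# Scalar parameter for illustration; real training updates whole tensors.
def adam_step(theta, grad, m, v, t, lr=1e-3, b1=0.0, b2=0.99, eps=1e-8):
    m = b1 * m + (1 - b1) * grad        # first-moment estimate (b1 = 0 => m = grad)
    v = b2 * v + (1 - b2) * grad ** 2   # second-moment estimate
    m_hat = m / (1 - b1 ** t)           # bias correction
    v_hat = v / (1 - b2 ** t)
    theta = theta - lr * m_hat / (v_hat ** 0.5 + eps)
    return theta, m, v

# One step from theta = 1.0 with gradient 2.0
theta, m, v = adam_step(theta=1.0, grad=2.0, m=0.0, v=0.0, t=1)
```

With β₁ = 0 the first moment carries no momentum, so each step is effectively signed-gradient descent scaled by the second-moment estimate, a common choice in GAN training.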
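The dataset rows describe a random 50 000 / 10 000 split of the MNIST training fold, with the test fold reserved for testing. A framework-agnostic, index-based sketch of such a split (function name and seed are assumptions, not from the paper):

```python
import random

# Hedged sketch of the MNIST split described above: the 60,000-image training
# fold is randomly divided into 50,000 training and 10,000 validation indices.
# The 10,000-image test fold would be used as-is for testing.
def split_mnist_indices(n_train_fold=60_000, n_val=10_000, seed=0):
    idx = list(range(n_train_fold))
    random.Random(seed).shuffle(idx)   # seeded shuffle for reproducibility
    return idx[n_val:], idx[:n_val]    # (training indices, validation indices)

train_idx, val_idx = split_mnist_indices()
```

The same pattern covers the synthetic experiments (70 000 / 20 000 / 10 000 per d) by slicing the shuffled index list at two cut points instead of one.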