Phase Retrieval Under a Generative Prior

Authors: Paul Hand, Oscar Leong, Vlad Voroninski

NeurIPS 2018

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We corroborate these results with experiments showing that exploiting generative models in phase retrieval tasks outperforms both sparse and general phase retrieval methods. In this section, we investigate the use of enforcing generative priors in phase retrieval tasks. We compared our results with the sparse truncated amplitude flow algorithm (SPARTA) [35] and three popular general phase retrieval methods: Fienup [15], Gerchberg-Saxton [16], and Wirtinger Flow [8].
Researcher Affiliation | Collaboration | Paul Hand (Northeastern University, p.hand@northeastern.edu); Oscar Leong (Rice University, oscar.f.leong@rice.edu); Vladislav Voroninski (Helm.ai, vlad@helm.ai)
Pseudocode | Yes | Algorithm 1: Deep Phase Retrieval (DPR) gradient method (a hedged sketch of this kind of latent-space optimization appears after the table).
Open Source Code | No | A MATLAB implementation of the SPARTA algorithm was made publicly available by its authors at https://gangwg.github.io/SPARTA/, but this is a baseline rather than the paper's own method. The paper does not provide concrete access to the source code for its described methodology.
Open Datasets | Yes | In the first image experiment, we used a pretrained Variational Autoencoder (VAE) from [4] that was trained on the MNIST dataset [24]. In the second experiment, we used a pretrained Deep Convolutional Generative Adversarial Network (DCGAN) from [4] that was trained on the CelebA dataset [27].
Dataset Splits | No | The paper mentions using the MNIST and CelebA datasets, and refers to a 'DCGAN's test set', but it does not provide specific details on the training, validation, or test splits (e.g., percentages or sample counts) used for its experiments.
Hardware Specification | No | The paper does not provide specific details about the hardware (e.g., GPU or CPU models, memory) used to run the experiments.
Software Dependencies | No | The paper mentions a MATLAB implementation, the Adam optimizer, and PhasePack, but it does not provide specific version numbers for any of these software dependencies.
Experiment Setup | No | The paper states details such as running 'Algorithm 1 for 25 random instances', using the Adam optimizer, and allowing '10 random restarts', but it does not provide specific hyperparameter values (e.g., learning rate, batch size, number of epochs) for the optimization process; a hedged sketch of such a restart setup is included after the table.
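
For orientation, here is a minimal sketch of the latent-space optimization that Algorithm 1 (the DPR gradient method) performs: searching for a latent code z whose generated image G(z) matches the observed magnitude measurements |A x*|. Since no code is released, the generator `G`, measurement matrix `A`, function names, and parameter values below are illustrative placeholders, not the authors' implementation, and the sketch does not reproduce every detail of the paper's Algorithm 1.

```python
import torch

def dpr_objective(z, G, A, y):
    """Empirical risk for phase retrieval under a generative prior:
    0.5 * || |A G(z)| - y ||^2, where y = |A x*| are the observed magnitudes.

    z : latent code (1D tensor with requires_grad=True)
    G : generator (e.g., a pretrained VAE decoder or DCGAN) mapping z to an image
    A : (m x n) measurement matrix
    y : (m,) vector of observed magnitudes
    """
    y_hat = torch.abs(A @ G(z).flatten())
    return 0.5 * torch.sum((y_hat - y) ** 2)

def dpr_gradient_method(G, A, y, latent_dim, steps=500, lr=1e-2):
    """Simplified stand-in for Algorithm 1: plain (sub)gradient descent on z."""
    z = torch.randn(latent_dim, requires_grad=True)
    opt = torch.optim.SGD([z], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        loss = dpr_objective(z, G, A, y)
        loss.backward()
        opt.step()
    return z.detach(), loss.item()
```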
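
Similarly, because the paper reports using Adam with 10 random restarts over 25 random instances but no learning rate or iteration count, the following is a hedged sketch of what such a restart harness could look like. The hyperparameter values are assumptions for illustration only, and `dpr_objective` is the placeholder objective from the sketch above.

```python
import torch

def run_with_restarts(G, A, y, latent_dim, n_restarts=10, steps=1000, lr=1e-2):
    """Optimize the latent code with Adam from several random initializations
    and keep the best result. The restart count mirrors the paper's '10 random
    restarts'; the step count and learning rate are assumed, not reported.
    """
    best_z, best_loss = None, float("inf")
    for _ in range(n_restarts):
        z = torch.randn(latent_dim, requires_grad=True)
        opt = torch.optim.Adam([z], lr=lr)
        for _ in range(steps):
            opt.zero_grad()
            loss = dpr_objective(z, G, A, y)  # placeholder objective defined above
            loss.backward()
            opt.step()
        if loss.item() < best_loss:
            best_z, best_loss = z.detach().clone(), loss.item()
    return best_z, best_loss
```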