Misspecified Phase Retrieval with Generative Priors

Authors: Zhaoqiang Liu, Xinshao Wang, Jiulong Liu

NeurIPS 2022 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Experiments on image datasets are performed to demonstrate that our approach performs on par with or even significantly outperforms several competing methods.
Researcher Affiliation | Academia | Zhaoqiang Liu, National University of Singapore, dcslizha@nus.edu.sg; Xinshao Wang, University of Oxford, xinshao.wang@eng.ox.ac.uk; Jiulong Liu, Chinese Academy of Sciences, jiulongliu@lsec.cc.ac.cn
Pseudocode | Yes | Algorithm 1: A two-step approach for misspecified phase retrieval with generative priors.
Input: {(a_i, y_i)}_{i=1}^m, step size ζ > 0, number of iterations T1 for the first step, number of iterations T2 for the second step, pre-trained generative model G, initial vector w(0).
First step:
1: for t = 0, 1, ..., T1 - 1 do
2:   w(t+1) = P_G(V w(t))
3: end for
Second step: Let x(0) := w(T1)
1: for t = 0, 1, ..., T2 - 1 do
2:   Calculate ν̂(t), y_i^(t), x̃(t+1), and x(t+1) according to (13), (14), (15), and (16), respectively
3: end for
Output: x̂ := x(T2)
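As a reading aid, the following is a minimal NumPy sketch of the two-step loop structure quoted above; it is not the authors' implementation. The matrix V (Eq. (10)), the projection P_G (Eq. (11)), and the second-step update defined by Eqs. (13)-(16) are treated as given inputs, since their exact forms are specified in the paper rather than in the quoted pseudocode.

```python
import numpy as np

def two_step_mpr(V, w0, project_G, second_step_update, T1=20, T2=30):
    """Sketch of the loop structure of Algorithm 1 (not the authors' code).

    V                  : (n, n) matrix from Eq. (10), built from the pairs (a_i, y_i)
    w0                 : initial vector w(0)
    project_G          : callable approximating the projection P_G onto the range of G (Eq. (11))
    second_step_update : callable mapping x(t) to x(t+1) via Eqs. (13)-(16); its exact
                         form follows the paper and is treated as given here
    """
    # First step: T1 projected power-method-style iterations, w(t+1) = P_G(V w(t)).
    w = np.asarray(w0, dtype=float)
    for _ in range(T1):
        w = project_G(V @ w)

    # Second step: T2 refinement iterations starting from x(0) := w(T1).
    x = w
    for _ in range(T2):
        x = second_step_update(x)
    return x  # x_hat := x(T2)
```

With the experiment settings reported below (T1 = 20, T2 = 30), the call would simply be two_step_mpr(V, w0, project_G, second_step_update, T1=20, T2=30).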
Open Source Code | Yes | 3. If you ran experiments... (a) Did you include the code, data, and instructions needed to reproduce the main experimental results (either in the supplemental material or as a URL)? [Yes] The code is included in the supplementary material.
Open Datasets | Yes | The MNIST dataset contains 60,000 images of handwritten digits. The size of each image is 28 × 28, and thus the dimension of the image vector is n = 784. For the MNIST dataset, the generative model G is set to be (the normalized version of) a pre-trained variational autoencoder (VAE) model with the latent dimension being k = 20. We make use of the VAE model pre-trained by the authors of [6] directly. [...] Additional results for the MNIST dataset and some experimental results for the CelebA [53] dataset are presented in the supplementary material.
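For orientation, the stand-in decoder below only fixes the dimensions stated above (latent dimension k = 20, image vectors of dimension n = 784). The experiments actually use the normalized, pre-trained VAE from [6]; its architecture is not given in the quoted text, so the hidden width and activations here are purely hypothetical.

```python
import tensorflow as tf

# Hypothetical stand-in decoder with the stated shapes (k = 20 -> n = 784).
# The real generative model is the VAE pre-trained by the authors of [6];
# only the input/output dimensions below come from the quoted text.
def make_generator(k=20, n=784):
    return tf.keras.Sequential([
        tf.keras.layers.Dense(500, activation="relu", input_shape=(k,)),  # hidden width is an assumption
        tf.keras.layers.Dense(n, activation="sigmoid"),                   # pixel intensities in [0, 1]
    ])
```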
Dataset Splits | No | The VAE model is trained by the Adam optimizer with a mini-batch size of 100 and a learning rate of 0.001, and is trained on the images in the training set. The projection step P_G(·) (cf. (11)) is approximated by the Adam optimizer with a learning rate of 0.1 and 120 steps. We report the results on 10 testing images that are selected from the test set.
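The quoted projection step P_G(·) (Adam, learning rate 0.1, 120 steps) can be pictured as a search over the latent code for the point in the range of G closest to the input. The sketch below illustrates this in TF2-style eager code (the paper's experiments use TensorFlow 1.5.0); the random initialization of the latent code is an assumption, not something stated in the quoted text.

```python
import tensorflow as tf

def project_G(x_target, G, k=20, steps=120, lr=0.1):
    """Approximate P_G(x_target) by minimizing ||G(z) - x_target||^2 over the
    latent code z with Adam (learning rate 0.1, 120 steps, as quoted above).
    G is assumed to map a (1, k) latent vector to a (1, n) image vector."""
    z = tf.Variable(tf.random.normal([1, k]))          # random start for z is an assumption
    opt = tf.keras.optimizers.Adam(learning_rate=lr)
    for _ in range(steps):
        with tf.GradientTape() as tape:
            loss = tf.reduce_sum(tf.square(G(z) - x_target))
        grads = tape.gradient(loss, [z])
        opt.apply_gradients(zip(grads, [z]))
    return G(z)
```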
Hardware Specification | Yes | All experiments are run using Python 3.6 and TensorFlow 1.5.0, with an NVIDIA GeForce GTX 1080 Ti 11GB GPU.
Software Dependencies | Yes | All experiments are run using Python 3.6 and TensorFlow 1.5.0, with an NVIDIA GeForce GTX 1080 Ti 11GB GPU.
Experiment Setup | Yes | For Algorithm 1, we set T1 = 20 and T2 = 30. As mentioned in Section 3, the starting point w(0) is set to be the column of (1/m) Σ_{i=1}^m y_i a_i a_i^T (i.e., a shifted version of V defined in (10)) that corresponds to the largest diagonal entry. In addition, as mentioned in Remark 6, we set the step size ζ as ζ = 1/ν̂(t) (cf. (13)) in the t-th iteration of the second step of Algorithm 1. [...] We follow [37] to set τ = 0.9. For a fair comparison, we use the vector produced after T1 = 20 iterations of the first step of Algorithm 1 as the initialization vector of APPGD, and then we run APPGD for T2 = 30 iterations.
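To make the quoted initialization and step-size rule concrete, here is a short NumPy sketch. It only covers the two quoted rules: the matrix whose column gives w(0), and the rule ζ = 1/ν̂(t); ν̂(t) itself is defined by Eq. (13) in the paper and is not reproduced here.

```python
import numpy as np

def initial_vector(A, y):
    """w(0) as quoted: build M = (1/m) * sum_i y_i a_i a_i^T (a shifted version of V
    from Eq. (10)) and return the column of M with the largest diagonal entry.
    A is the (m, n) matrix with rows a_i and y is the (m,) vector of observations."""
    m = A.shape[0]
    M = (A * y[:, None]).T @ A / m
    return M[:, int(np.argmax(np.diag(M)))]

def step_size(nu_hat_t):
    """Step-size rule from Remark 6: zeta = 1 / nu_hat(t) in the t-th iteration
    of the second step of Algorithm 1."""
    return 1.0 / nu_hat_t
```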