Convolutional Phase Retrieval

Authors: Qing Qu, Yuqian Zhang, Yonina Eldar, John Wright

NeurIPS 2017 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | 4 Experiments: First, we investigate the dependence of the sample complexity m on ‖C_x‖. We assume the ground truth x ∈ CS^{n−1}, and consider three cases: (1) x = e_1 with ‖C_x‖ = 1, where e_1 is the standard basis vector; (2) x generated uniformly at random from CS^{n−1}; (3) x = (1/√n)·1, with ‖C_x‖ = √n. For each case, we fix the signal length n = 1000 and vary the ratio m/n. For each ratio m/n, we randomly generate the kernel a ∼ CN(0, I) and repeat the experiment 100 times. We initialize the algorithm by the spectral method [29, Algorithm 1] and run the gradient descent (8). Given the algorithm output x̂, we judge the success of recovery by inf_{ϕ∈[0,2π)} ‖x̂ − x·e^{iϕ}‖ ≤ ε, where ε = 10⁻⁵. From Fig. 2, we can see that the larger ‖C_x‖ is, the more samples are needed for exact recovery.
Researcher Affiliation | Academia | Qing Qu (Columbia University, qq2105@columbia.edu); Yuqian Zhang (Columbia University, yz2409@columbia.edu); Yonina C. Eldar (Technion, yonina@ee.technion.ac.il); John Wright (Columbia University, jw2966@columbia.edu)
Pseudocode | No | The paper does not contain any structured pseudocode or algorithm blocks (clearly labeled algorithm sections or code-like formatted procedures).
Open Source Code | No | The paper does not provide concrete access to source code (specific repository link, explicit code release statement, or code in supplementary materials) for the methodology described in this paper.
Open Datasets | No | The paper mentions using an image dataset ("an image of size 200 × 300") and antenna pattern data ("antenna pattern a ∈ C^{361} obtained from Bell labs"), but does not provide any concrete access information (link, DOI, repository, or formal citation with author/year) that would allow public access to these data.
Dataset Splits | No | The paper does not provide specific dataset split information (exact percentages, sample counts, citations to predefined splits, or detailed splitting methodology) needed to reproduce the data partitioning for training, validation, and testing.
Hardware Specification | No | The paper mentions: "Experiment using general Gaussian measurements A ∈ C^{m×n} could easily run out of memory on a personal computer for problems of this size." This implies experiments were run on a personal computer, but it lacks specific hardware details such as exact CPU/GPU models, processor types, or memory amounts.
Software Dependencies | No | The paper does not provide specific ancillary software details (e.g., library or solver names with version numbers) needed to replicate the experiment.
Experiment Setup | Yes | For each case, we fix the signal length n = 1000 and vary the ratio m/n. For each ratio m/n, we randomly generate the kernel a ∼ CN(0, I) and repeat the experiment 100 times. We initialize the algorithm by the spectral method [29, Algorithm 1] and run the gradient descent (8). Given the algorithm output x̂, we judge the success of recovery by inf_{ϕ∈[0,2π)} ‖x̂ − x·e^{iϕ}‖ ≤ ε, where ε = 10⁻⁵. We run the power method for 100 iterations for initialization, and stop the algorithm once the error is smaller than 1 × 10⁻⁴.
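For concreteness, below is a minimal NumPy sketch of the measurement model and success criterion quoted in the Research Type and Experiment Setup rows above. The FFT-based cyclic convolution, the three choices of ground truth x, and the phase-invariant recovery error follow the quoted text; the spectral initialization [29, Algorithm 1] and the gradient descent update (8) themselves are not reproduced here, and the oversampling ratio m/n is an arbitrary placeholder rather than a value from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def cconv(a, x):
    """Cyclic convolution a ⊛ x of length m = len(a), computed via the FFT."""
    m = len(a)
    x_pad = np.zeros(m, dtype=complex)
    x_pad[:len(x)] = x
    return np.fft.ifft(np.fft.fft(a) * np.fft.fft(x_pad))

def recovery_error(x_hat, x):
    """Phase-invariant error inf_{phi in [0, 2*pi)} ||x_hat - x * exp(i*phi)||_2."""
    phi = -np.angle(np.vdot(x_hat, x))   # optimal global phase alignment
    return np.linalg.norm(x_hat - x * np.exp(1j * phi))

n = 1000        # signal length used in the quoted experiment
m = 4 * n       # placeholder ratio m/n; the paper sweeps this ratio

# Three ground-truth signals on the complex sphere CS^{n-1}
x_e1 = np.zeros(n, dtype=complex); x_e1[0] = 1.0                 # x = e_1
x_rand = rng.standard_normal(n) + 1j * rng.standard_normal(n)
x_rand /= np.linalg.norm(x_rand)                                  # uniform on CS^{n-1}
x_flat = np.ones(n, dtype=complex) / np.sqrt(n)                   # x = (1/sqrt(n)) * 1

# Convolutional phase retrieval measurements y = |a ⊛ x| with a ~ CN(0, I)
a = (rng.standard_normal(m) + 1j * rng.standard_normal(m)) / np.sqrt(2)
y = np.abs(cconv(a, x_rand))

# Success criterion from the quoted text: recovery_error(x_hat, x) <= 1e-5
```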
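The Hardware Specification row quotes the paper's remark that general Gaussian measurements A ∈ C^{m×n} would run out of memory on a personal computer for the image experiment. A back-of-the-envelope sketch of why is given below; the oversampling factor of 4 is an assumed placeholder, not a value quoted from the paper.

```python
n = 200 * 300   # pixels in the 200 x 300 image mentioned under Open Datasets
m = 4 * n       # hypothetical number of measurements; the paper's exact m is not quoted here

dense_bytes = 16 * m * n   # explicit A in C^{m x n}, complex128 (16 bytes per entry)
conv_bytes = 16 * m        # convolutional model stores only the kernel a in C^m (applied via FFT)

print(f"dense A:        {dense_bytes / 1e9:,.0f} GB")  # roughly 230 GB, far beyond a PC's RAM
print(f"conv. kernel a: {conv_bytes / 1e6:,.2f} MB")   # a few megabytes
```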
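The Experiment Setup row states that the power method is run for 100 iterations for initialization. A generic matrix-free power-method spectral initializer of this flavor is sketched below; the exact weighting and any truncation used in [29, Algorithm 1] may differ, so this is only an illustrative assumption.

```python
import numpy as np

def spectral_init(apply_A, apply_Ah, y, n, iters=100, seed=0):
    """Power method for the leading eigenvector of (1/m) A^H diag(y^2) A.

    apply_A(z) and apply_Ah(v) are matrix-free handles for A and its adjoint
    (e.g., FFT-based cyclic convolution with the kernel a), so the measurement
    matrix never has to be formed explicitly.
    """
    rng = np.random.default_rng(seed)
    m = len(y)
    z = rng.standard_normal(n) + 1j * rng.standard_normal(n)
    z /= np.linalg.norm(z)
    for _ in range(iters):                      # 100 iterations, as in the quoted setup
        z = apply_Ah((y ** 2) * apply_A(z)) / m
        z /= np.linalg.norm(z)
    return z
```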