Exponential Spectral Pursuit: An Effective Initialization Method for Sparse Phase Retrieval

Authors: Mengchu Xu, Yuxuan Zhang, Jian Wang

ICML 2024

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | In this section, we conduct numerical experiments to evaluate the performance of ESP. To compare the performance of different methods, we introduce two metrics: i) relative error and ii) fraction of recovered support.
Researcher Affiliation | Academia | School of Data Science, Fudan University, Shanghai, China. Correspondence to: Jian Wang <jian wang@fudan.edu.cn>.
Pseudocode | Yes | Algorithm 1 Exponential Spectral Pursuit. Input: sparsity k, samples y, and sampling matrix A. Step 1: Find the index i_max of the largest diagonal element of L. Step 2: Select the index set S of the k largest-magnitude entries in the i_max-th column of L. Step 3: Use the principal eigenvector of L_S as the estimate of z ∈ C^n and re-scale it so that ‖z‖_2 = λ. Output: z.
Open Source Code | No | The paper does not provide any statements about releasing source code or links to a code repository.
Open Datasets | No | In our experiments, the sampling vectors {a_i}_{i=1}^m are n-dimensional standard complex Gaussian random vectors. The input k-sparse signal x ∈ C^n has supp(x) generated at random and nonzero elements i) drawn from the standard complex Gaussian distribution or ii) all equal to 1; these are called the sparse Gaussian and sparse 0-1 signal, respectively.
Dataset Splits | No | The paper describes generating synthetic data and running a specified number of independent trials (e.g., '1,000 independent trials', '200 independent trials'), but does not specify train/validation/test splits in the conventional sense for pre-existing datasets.
Hardware Specification | No | The paper does not provide specific details about the hardware used to run the experiments, such as GPU or CPU models.
Software Dependencies | No | The paper does not list specific software dependencies with version numbers used for the experiments.
Experiment Setup | Yes | Recall that the original TP involves two hyper-parameters η1, η2. We optimized them and set η1 = 0.2, η2 = 5. To show the importance of optimization, we use TP-UD to denote TP with un-designed hyper-parameters η1 = 0.9, η2 = 1.1 and test its performance.
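The Algorithm 1 pseudocode quoted in the table can be sketched in code. This is a minimal sketch of Steps 1-3 only: the construction of the preprocessing matrix L from (y, A) and the value of the re-scaling factor λ follow definitions in the paper that are not reproduced in the excerpt, so both are taken here as inputs; the function name and signature are illustrative, not the authors'.

```python
import numpy as np

def esp_steps_1_to_3(L, k, lam):
    """Sketch of ESP (Algorithm 1) Steps 1-3.

    L   : n-by-n Hermitian preprocessing matrix (its construction from
          the samples y and sampling matrix A follows the paper and is
          assumed given here).
    k   : sparsity level.
    lam : re-scaling factor so that the output satisfies ||z||_2 = lam.
    """
    n = L.shape[0]
    # Step 1: index i_max of the largest diagonal element of L.
    i_max = int(np.argmax(np.real(np.diag(L))))
    # Step 2: index set S of the k largest-magnitude entries in column i_max.
    col = np.abs(L[:, i_max])
    S = np.sort(np.argpartition(col, -k)[-k:])
    # Step 3: principal eigenvector of the k-by-k submatrix L_S.
    eigvals, eigvecs = np.linalg.eigh(L[np.ix_(S, S)])  # ascending order
    v = eigvecs[:, -1]                                  # top eigenvector
    # Embed back into C^n and re-scale to the target norm.
    z = np.zeros(n, dtype=complex)
    z[S] = v
    return lam * z / np.linalg.norm(z)
```

The estimate is supported on at most k coordinates by construction, matching the sparse-initialization role the algorithm plays in the paper.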
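The data model and the two metrics quoted in the table (relative error, fraction of recovered support) can also be sketched. The intensity measurement model y_i = |<a_i, x>|^2 and the phase-aligned error definition below are common sparse-phase-retrieval conventions assumed for illustration; the paper's exact formulas are not reproduced in the excerpt.

```python
import numpy as np

rng = np.random.default_rng(0)
n, m, k = 64, 256, 5

# Sampling vectors: m standard complex Gaussian vectors in C^n (rows of A).
A = (rng.standard_normal((m, n)) + 1j * rng.standard_normal((m, n))) / np.sqrt(2)

# Sparse Gaussian signal: random support, standard complex Gaussian nonzeros.
x = np.zeros(n, dtype=complex)
supp = rng.choice(n, size=k, replace=False)
x[supp] = (rng.standard_normal(k) + 1j * rng.standard_normal(k)) / np.sqrt(2)

# Phaseless measurements (assumed intensity model).
y = np.abs(A @ x) ** 2

def relative_error(x, z):
    # Phase retrieval recovers x only up to a global phase, so align z to x
    # with the optimal phase before comparing.
    inner = np.vdot(z, x)
    c = inner / abs(inner) if abs(inner) > 0 else 1.0
    return np.linalg.norm(x - c * z) / np.linalg.norm(x)

def support_fraction(x, z, k):
    # Fraction of the true support hit by the k largest-magnitude entries of z.
    true_supp = set(np.flatnonzero(np.abs(x) > 0).tolist())
    est_supp = set(np.argpartition(np.abs(z), -k)[-k:].tolist())
    return len(true_supp & est_supp) / k
```

A perfect estimate up to global phase, e.g. z = exp(1j * 0.7) * x, yields a relative error of zero and a support fraction of 1.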