Geometric Analysis of Matrix Sensing over Graphs

Authors: Haixiang Zhang, Ying Chen, Javad Lavaei

NeurIPS 2023

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Besides the theoretical guarantees, we numerically illustrate the close relation between the Ω-RIP condition and the optimization complexity. In this section, we show how the optimization complexity is related to the Ω-RIP constant δ and the sampling rate p via a numerical example.
Researcher Affiliation | Academia | Haixiang Zhang, Department of Mathematics, University of California, Berkeley, Berkeley, CA 94720, haixiang_zhang@berkeley.edu; Ying Chen, Department of IEOR, University of California, Berkeley, Berkeley, CA 94720, ying-chen@berkeley.edu; Javad Lavaei, Department of IEOR, University of California, Berkeley, Berkeley, CA 94720, lavaei@berkeley.edu
Pseudocode | No | The paper does not contain any structured pseudocode or algorithm blocks.
Open Source Code | No | The paper does not include any explicit statements about releasing source code for the methodology described, nor does it provide a direct link to a code repository.
Open Datasets | No | In this example, we choose a random orthogonal matrix V ∈ ℝ^{n×n} and define the loss function to be f_c[M_Ω; (V M* Vᵀ)_Ω] := (1/2) [M − V M* Vᵀ]_Ω : (c·I + H) : [M − V M* Vᵀ]_Ω for all M ∈ ℝ^{n×n}, where c ∈ ℝ is a hyper-parameter and the tensor H and the ground truth M* are defined in Section 3.2. We generate 100 independent problem instances and compute the success rate of the gradient descent algorithm with random initialization. (A hedged reconstruction of this loss appears in the sketch after the table.)
Dataset Splits | No | The paper generates '100 independent problem instances' for numerical illustrations but does not describe any train/validation/test dataset splits, cross-validation, or other specific data partitioning methodology.
Hardware Specification | No | The paper does not provide any specific hardware details such as CPU/GPU models, processor types, or memory used for running the experiments.
Software Dependencies | No | The paper mentions using the 'Burer-Monteiro factorization' and the 'perturbed accelerated gradient descent algorithm [23]' but does not list specific software names with version numbers (e.g., library versions, solver versions).
Experiment Setup | Yes | We choose the problem size to be n = 10 and r = 5. The regularization parameters are α = 10 and λ = 100. The sets of sampling rates and Ω-RIP_{2r,2r} constants are p ∈ {0.7, 0.75, ..., 0.95, 1.0} and δ ∈ {0.2, 0.25, ..., 0.75, 0.8}. We solve each problem instance by the Burer-Monteiro factorization and the perturbed accelerated gradient descent algorithm [23], where the constant step size is 0.007/c. (See the sketch after the table.)
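The two quoted passages above pin down enough of the experiment to sketch it in code. The snippet below is a minimal reconstruction, not the authors' implementation: the tensor H and the ground truth M* are defined in Section 3.2 of the paper and are not reproduced in this report, so random stand-ins are used (H is rescaled so that c·I + H has spectrum in [c(1−δ), c(1+δ)], which only imitates an Ω-RIP constant of δ); the regularizers with α = 10 and λ = 100 are omitted because their form is not quoted; plain gradient descent stands in for the perturbed accelerated gradient method of [23]; and the value of c is an arbitrary choice. Only n = 10, r = 5, the step size 0.007/c, and the grids for p and δ come from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Parameters quoted in the report. The hyper-parameter c is not given a
# value, so c = 1.0 is arbitrary; the regularizers (alpha = 10, lambda = 100)
# are omitted because their form is not quoted in this report.
n, r = 10, 5
p = 0.9              # one sampling rate from the grid {0.7, 0.75, ..., 1.0}
delta = 0.5          # one Omega-RIP constant from the grid {0.2, ..., 0.8}
c = 1.0
step = 0.007 / c     # constant step size reported in the paper

# Stand-in rank-r PSD ground truth M*; the paper's construction is given in
# its Section 3.2 and is not reproduced here.
U = rng.standard_normal((n, r))
M_star = U @ U.T

# Random orthogonal V and the target V M* V^T, as in the quoted loss.
V, _ = np.linalg.qr(rng.standard_normal((n, n)))
T = V @ M_star @ V.T

# Bernoulli(p) sampling mask Omega.
Omega = rng.random((n, n)) < p

# Stand-in 4-tensor H with the pairing symmetry H_{ijkl} = H_{klij},
# rescaled so that c*I + H has spectrum in [c(1 - delta), c(1 + delta)];
# the paper's H realizes the Omega-RIP constant delta, which this imitates.
H = rng.standard_normal((n, n, n, n))
H = 0.5 * (H + H.transpose(2, 3, 0, 1))
lam = np.abs(np.linalg.eigvalsh(H.reshape(n * n, n * n))).max()
H *= delta * c / lam

def loss_and_grad(X):
    """f_c = 1/2 [M - T]_Omega : (c I + H) : [M - T]_Omega,  M = X X^T."""
    E = np.where(Omega, X @ X.T - T, 0.0)        # [M - V M* V^T]_Omega
    G = c * E + np.einsum('ijkl,kl->ij', H, E)   # (c I + H) : E
    f = 0.5 * np.sum(E * G)
    G = np.where(Omega, G, 0.0)                  # gradient w.r.t. M, re-masked
    return f, (G + G.T) @ X                      # chain rule through M = X X^T

# Plain gradient descent from a small random initialization; the paper
# instead uses the perturbed accelerated gradient method of [23].
X = 0.1 * rng.standard_normal((n, r))
for _ in range(30000):
    f, g = loss_and_grad(X)
    X -= step * g

obs_err = np.linalg.norm(np.where(Omega, X @ X.T - T, 0.0))
print(f"final loss {f:.2e}, relative error on Omega "
      f"{obs_err / np.linalg.norm(np.where(Omega, T, 0.0)):.2e}")
```

The paper's experiment would repeat such a run over 100 independent instances for each (p, δ) pair and record the fraction of runs that succeed; that success rate is how it relates the Ω-RIP constant and the sampling rate to optimization complexity.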