Near-Isometric Properties of Kronecker-Structured Random Tensor Embeddings

Authors: Qijia Jiang

NeurIPS 2022

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | In this section, we numerically investigate (1) embedding dimension scaling with d for the two types of random embeddings in Sections 4 and 5; (2) signal recovery from random Gaussian measurements as elaborated in Section 6.1, where the signal is rank-1 and belongs to a product of cones.
Researcher Affiliation | Academia | Qijia Jiang, Lawrence Berkeley National Laboratory, qjiang@lbl.gov
Pseudocode | No | No structured pseudocode or algorithm blocks were found.
Open Source Code | No | The paper references the Matlab Tensor Toolbox with a URL, but this is a third-party tool, not the authors' own source code for their methodology. No explicit statement of or link to their own code was provided.
Open Datasets | No | The experiments use synthetically generated data rather than a public dataset. For the first experiment, we let n = 10, d = 5 and pick each factor {u_j} to be 20% sparse. The figure below reports the average distortion of the embedding |‖Sx‖_2^2 − 1| over 25 runs for both the row-wise tensored and recursive sketch with Gaussian random factors. (Both sketch constructions are illustrated in the code examples after this table.)
Dataset Splits | No | No specific dataset split information (percentages, sample counts, or citations to predefined splits) for training, validation, or testing was found.
Hardware Specification | No | The paper's own checklist explicitly states 'N/A' for hardware specification, and no specific hardware details were found in the text.
Software Dependencies | No | We use the tucker_als function from the Matlab Tensor Toolbox (http://www.tensortoolbox.org) for computing the best rank-(1, 1, 1) tensor approximation, after which a gradient update is made on each factor followed by ℓ1 projection.
Experiment Setup | Yes | For the first experiment, we let n = 10, d = 5 and pick each factor {u_j} to be 20% sparse. The figure below reports the average distortion of the embedding |‖Sx‖_2^2 − 1| over 25 runs for both the row-wise tensored and recursive sketch with Gaussian random factors. ... We set each factor {u_j} to be 20% sparse and let d = 3, n = 10, m = 2^k · 0.8nd for k ∈ {1, ..., 3} and record the successful recovery out of 25 trials. Stepsize is picked to be 0.1 and success is defined as L(z_1, ..., z_d) ≤ 0.1 after 500 gradient steps. (A hedged re-implementation of this recovery protocol appears after this table.)
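
The distortion experiment quoted above is easy to illustrate numerically. Below is a minimal Python/NumPy sketch (the paper itself works in MATLAB) of the row-wise tensored Gaussian embedding: each sketch row is the Kronecker product of d independent Gaussian vectors, so for a rank-1 input x = u_1 ⊗ ... ⊗ u_d the row inner products factorize into products of <g_j, u_j> and the n^d-dimensional vector is never materialized. The embedding dimension m = 200 and the reading of "20% sparse" as 20% nonzero entries are assumptions not fixed by the excerpt.

```python
import numpy as np

def row_tensored_sketch(factors, m, rng):
    # One sketch row = Kronecker product of d iid N(0,1) vectors in R^n.
    # For a rank-1 input x = u_1 (x) ... (x) u_d the inner product factorizes:
    # <g_1 (x) ... (x) g_d, u_1 (x) ... (x) u_d> = prod_j <g_j, u_j>,
    # so the n^d-dimensional vector x is never formed explicitly.
    y = np.empty(m)
    for i in range(m):
        prod = 1.0
        for u in factors:
            g = rng.standard_normal(u.size)
            prod *= g @ u
        y[i] = prod
    return y / np.sqrt(m)  # this scaling gives E||Sx||^2 = ||x||^2

rng = np.random.default_rng(0)
n, d, m, trials = 10, 5, 200, 25   # m = 200 is an assumed embedding dimension
distortions = []
for _ in range(trials):
    factors = []
    for _ in range(d):
        u = rng.standard_normal(n)
        u[rng.permutation(n)[: int(0.8 * n)]] = 0.0  # keep 20% of entries nonzero
        factors.append(u / np.linalg.norm(u))        # unit norm, so ||x||_2 = 1
    Sx = row_tensored_sketch(factors, m, rng)
    distortions.append(abs(np.linalg.norm(Sx) ** 2 - 1.0))
print("average distortion:", np.mean(distortions))
```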
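
The same driver can exercise the recursive sketch. One plausible reading of that construction (the paper follows the recursive composition idea of Ahle et al., whose base sketches are structured for speed; independent Gaussian maps are substituted here purely for clarity) is: embed each factor into R^m, then repeatedly pair up intermediate results, mapping each Kronecker product of two m-dimensional vectors back down to R^m. The 1/sqrt(m) scalings keep the squared norm unbiased at every level of the tree.

```python
import numpy as np

def recursive_sketch(factors, m, rng):
    # Leaf level: embed each factor R^n -> R^m with an independent Gaussian
    # map; the 1/sqrt(m) scaling makes E||G u||^2 = ||u||^2.
    nodes = [rng.standard_normal((m, u.size)) @ u / np.sqrt(m) for u in factors]
    # Internal levels: merge pairs via a Gaussian map R^{m^2} -> R^m applied
    # to the Kronecker product of the two children; an odd node passes through
    # unchanged and is merged at a later level.
    while len(nodes) > 1:
        merged = []
        for a, b in zip(nodes[0::2], nodes[1::2]):
            M = rng.standard_normal((m, m * m)) / np.sqrt(m)
            merged.append(M @ np.kron(a, b))
        if len(nodes) % 2 == 1:
            merged.append(nodes[-1])
        nodes = merged
    return nodes[0]
```

For unit-norm factors, abs(np.linalg.norm(recursive_sketch(factors, m, rng))**2 - 1.0) is the distortion of a single run and can stand in for row_tensored_sketch in the loop above. Note that each dense Gaussian merge matrix here has m × m^2 entries, which is precisely the cost the structured sketches in the paper are designed to avoid.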
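
For the recovery experiment, the quoted setup fixes the step size (0.1), the iteration budget (500), and the success threshold (L ≤ 0.1), but not the exact loss L, the ℓ1-ball radius, or how tucker_als enters the loop. The sketch below therefore assumes a least-squares loss on Gaussian measurements of the rank-1 tensor, an ℓ1 radius of sqrt(2) (matching 2-sparse unit-norm factors), and a random initialization in place of tucker_als; it mirrors the protocol rather than reproducing the paper's numbers.

```python
import numpy as np

def project_l1_ball(v, radius):
    # Euclidean projection onto {z : ||z||_1 <= radius} via the standard
    # sorting-based algorithm (Duchi et al., 2008).
    if np.abs(v).sum() <= radius:
        return v.copy()
    u = np.sort(np.abs(v))[::-1]
    css = np.cumsum(u)
    j = np.arange(1, v.size + 1)
    rho = np.nonzero(u * j > css - radius)[0][-1]
    theta = (css[rho] - radius) / (rho + 1.0)
    return np.sign(v) * np.maximum(np.abs(v) - theta, 0.0)

def loss_and_grads(A, y, z1, z2, z3):
    # Assumed form: L(z_1, z_2, z_3) = 0.5 * || <A_q, z1 (x) z2 (x) z3> - y ||^2.
    r = np.einsum("qijk,i,j,k->q", A, z1, z2, z3) - y
    L = 0.5 * np.sum(r ** 2)
    g1 = np.einsum("q,qijk,j,k->i", r, A, z2, z3)
    g2 = np.einsum("q,qijk,i,k->j", r, A, z1, z3)
    g3 = np.einsum("q,qijk,i,j->k", r, A, z1, z2)
    return L, (g1, g2, g3)

rng = np.random.default_rng(1)
n, d = 10, 3
m = int(2 * 0.8 * n * d)                      # k = 1 in m = 2^k * 0.8nd
A = rng.standard_normal((m, n, n, n)) / np.sqrt(m)

# Ground truth: 20% sparse (2 of 10 entries), unit-norm factors.
truth = []
for _ in range(d):
    u = np.zeros(n)
    u[rng.permutation(n)[:2]] = rng.standard_normal(2)
    truth.append(u / np.linalg.norm(u))
y = np.einsum("qijk,i,j,k->q", A, *truth)

# Random init in place of the paper's tucker_als initialization; the l1
# radius sqrt(2) (= sqrt(s) for s-sparse unit vectors) is our assumption.
radius = np.sqrt(2)
z = [project_l1_ball(rng.standard_normal(n), radius) for _ in range(d)]
for _ in range(500):
    _, grads = loss_and_grads(A, y, *z)
    z = [project_l1_ball(zi - 0.1 * gi, radius) for zi, gi in zip(z, grads)]

L_final, _ = loss_and_grads(A, y, *z)
print("final loss:", L_final, "| success:", L_final <= 0.1)
```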