Notice: The reproducibility variables underlying each score are classified using an automated LLM-based pipeline, validated against a manually labeled dataset. LLM-based classification introduces uncertainty and potential bias; scores should be interpreted as estimates. Full accuracy metrics and methodology are described in [1].
Learning Two-layer Neural Networks with Symmetric Inputs
Authors: Rong Ge, Rohith Kuditipudi, Zhize Li, Xiang Wang
ICLR 2019 | Venue PDF | LLM Run Details
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | In this section, we provide experimental results to validate the robustness of our algorithm for both Gaussian input distributions as well as more general symmetric distributions such as symmetric mixtures of Gaussians. |
| Researcher Affiliation | Academia | Rong Ge, Computer Science Department, Duke University, EMAIL; Rohith Kuditipudi, Computer Science Department, Duke University, EMAIL; Zhize Li, Institute for Interdisciplinary Information Sciences, Tsinghua University, EMAIL; Xiang Wang, Computer Science Department, Duke University, EMAIL |
| Pseudocode | Yes | Algorithm 1 Learning Single-layer Neural Networks |
| Open Source Code | No | No statement about open-source code release or link found. |
| Open Datasets | No | We provide experimental results to validate the robustness of our algorithm for both Gaussian input distributions as well as more general symmetric distributions such as symmetric mixtures of Gaussians. |
| Dataset Splits | No | given 10,000 training samples we plot the square root of the algorithm's error |
| Hardware Specification | No | No hardware specifications found. |
| Software Dependencies | No | No software dependencies with version numbers found. |
| Experiment Setup | No | In practice we find it is more robust to draw 10k random samples from the subspace spanned by the last k right-singular vectors of T and compute the CP decomposition of all the samples (reshaped as matrices and stacked together as a tensor) via alternating least squares (Comon et al., 2009). As alternating least squares can also be unstable we repeat this step 10 times and select the best one. |
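The quoted Experiment Setup cell describes a generic pattern: compute a CP decomposition of a stacked tensor via alternating least squares (ALS), repeat with several random restarts, and keep the best fit. Since no source code was released for this paper (per the table above), the following is only a minimal NumPy sketch of that pattern on a 3-way tensor, not the authors' implementation; the tensor dimensions, rank, seeds, and iteration counts are illustrative assumptions.

```python
import numpy as np

def unfold(X, mode):
    # Mode-n unfolding (Kolda convention) of a 3-way tensor.
    return np.reshape(np.moveaxis(X, mode, 0), (X.shape[mode], -1), order="F")

def cp_als(X, rank, n_iter=200, seed=0):
    # One run of rank-`rank` CP decomposition via alternating least squares.
    rng = np.random.default_rng(seed)
    I, J, K = X.shape
    A = rng.standard_normal((I, rank))
    B = rng.standard_normal((J, rank))
    C = rng.standard_normal((K, rank))
    for _ in range(n_iter):
        # Update each factor in turn, holding the other two fixed.
        A = unfold(X, 0) @ np.einsum("kr,jr->kjr", C, B).reshape(-1, rank) \
            @ np.linalg.pinv((C.T @ C) * (B.T @ B))
        B = unfold(X, 1) @ np.einsum("kr,ir->kir", C, A).reshape(-1, rank) \
            @ np.linalg.pinv((C.T @ C) * (A.T @ A))
        C = unfold(X, 2) @ np.einsum("jr,ir->jir", B, A).reshape(-1, rank) \
            @ np.linalg.pinv((B.T @ B) * (A.T @ A))
    Xhat = np.einsum("ir,jr,kr->ijk", A, B, C)
    err = np.linalg.norm(X - Xhat) / np.linalg.norm(X)
    return (A, B, C), err

def cp_als_best(X, rank, n_restarts=10):
    # "Repeat this step 10 times and select the best one": ALS can get
    # stuck, so rerun from different random initializations and keep the
    # run with the lowest relative reconstruction error.
    runs = [cp_als(X, rank, seed=s) for s in range(n_restarts)]
    return min(runs, key=lambda run: run[1])
```

A restart loop like `cp_als_best` is the standard remedy for ALS instability: each run is cheap, and only the lowest-error factorization is retained.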