Learning Two-layer Neural Networks with Symmetric Inputs
Authors: Rong Ge, Rohith Kuditipudi, Zhize Li, Xiang Wang
ICLR 2019
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | In this section, we provide experimental results to validate the robustness of our algorithm for both Gaussian input distributions as well as more general symmetric distributions such as symmetric mixtures of Gaussians. |
| Researcher Affiliation | Academia | Rong Ge, Computer Science Department, Duke University (rongge@cs.duke.edu); Rohith Kuditipudi, Computer Science Department, Duke University (rohith.kuditipudi@duke.edu); Zhize Li, Institute for Interdisciplinary Information Sciences, Tsinghua University (zz-li14@mails.tsinghua.edu.cn); Xiang Wang, Computer Science Department, Duke University (xwang@cs.duke.edu) |
| Pseudocode | Yes | Algorithm 1: Learning Single-layer Neural Networks |
| Open Source Code | No | No statement about open-source code release or link found. |
| Open Datasets | No | We provide experimental results to validate the robustness of our algorithm for both Gaussian input distributions as well as more general symmetric distributions such as symmetric mixtures of Gaussians. |
| Dataset Splits | No | given 10,000 training samples, we plot the square root of the algorithm's error |
| Hardware Specification | No | No hardware specifications found. |
| Software Dependencies | No | No software dependencies with version numbers found. |
| Experiment Setup | No | In practice, we find it is more robust to draw 10k random samples from the subspace spanned by the last k right-singular vectors of T and compute the CP decomposition of all the samples (reshaped as matrices and stacked together as a tensor) via alternating least squares (Comon et al., 2009). As alternating least squares can also be unstable, we repeat this step 10 times and select the best one. (This step is sketched in code below the table.) |
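
To make the quoted Experiment Setup concrete, here is a minimal sketch of that robustified CP-decomposition step, assuming NumPy and TensorLy are available. The names `T`, `k`, and `d` (with `T` taken to be a moment matrix with `d*d` columns), the sample counts, and the use of TensorLy's ALS-based `parafac` are illustrative assumptions, not the authors' reference implementation.

```python
# Sketch only: assumes T has d*d columns, so each right-singular vector
# reshapes into a d x d matrix. Not the authors' released code.
import numpy as np
import tensorly as tl
from tensorly.decomposition import parafac

def robust_cp_step(T, k, d, n_restarts=10, seed=0):
    rng = np.random.default_rng(seed)
    # Last k right-singular vectors of T, one per column of V_last.
    _, _, Vt = np.linalg.svd(T)
    V_last = Vt[-k:].T                    # shape (d*d, k)
    # Draw 10k random samples from the span of those k vectors.
    coeffs = rng.standard_normal((k, 10 * k))
    samples = V_last @ coeffs             # shape (d*d, 10k)
    # Reshape each sample into a d x d matrix and stack along mode 3.
    stacked = tl.tensor(samples.reshape(d, d, -1))
    # ALS can be unstable, so restart several times and keep the run
    # with the smallest reconstruction error.
    best_cp, best_err = None, np.inf
    for _ in range(n_restarts):
        cp = parafac(stacked, rank=k, init='random', n_iter_max=200,
                     random_state=int(rng.integers(1 << 31)))
        err = tl.norm(stacked - tl.cp_to_tensor(cp))
        if err < best_err:
            best_cp, best_err = cp, err
    return best_cp
```

Restarting ALS from several random initializations and keeping the lowest-reconstruction-error fit is a standard way to hedge against the instability the quoted passage mentions.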