Scale Up Nonlinear Component Analysis with Doubly Stochastic Gradients
Authors: Bo Xie, Yingyu Liang, Le Song
NeurIPS 2015
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We demonstrate the effectiveness and scalability of our algorithm on large scale synthetic and real world datasets. |
| Researcher Affiliation | Academia | Bo Xie (Georgia Institute of Technology, bo.xie@gatech.edu), Yingyu Liang (Princeton University, yingyul@cs.princeton.edu), Le Song (Georgia Institute of Technology, lsong@cc.gatech.edu) |
| Pseudocode | Yes | Algorithm 1: {α_i}_{i=1}^t = DSGD-KPCA(P(x), k) |
| Open Source Code | No | The paper does not contain any explicit statement about releasing the source code for the described methodology, nor does it provide a link to a code repository. |
| Open Datasets | Yes | Molecular Space dataset contains 2.3 million molecular motifs [6]. |
| Dataset Splits | Yes | We randomly select 20% as test set and out of the remaining training set, we randomly choose 5000 as validation set to select step sizes. |
| Hardware Specification | No | The paper does not provide specific details about the hardware used to run the experiments, such as GPU/CPU models or memory. |
| Software Dependencies | No | The paper does not list specific software dependencies with version numbers. |
| Experiment Setup | Yes | In each iteration, we use a data mini-batch of size 512, and a random feature minibatch of size 128. |
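The setup quoted above samples both a data mini-batch (size 512) and a random-feature mini-batch (size 128) at every iteration, which is the "doubly stochastic" structure of the paper's DSGD-KPCA. The sketch below is an illustrative approximation of that idea, not the authors' Algorithm 1: it draws a fixed pool of random Fourier features for an RBF kernel, then each iteration updates only a sampled subset of feature coordinates with an Oja-style step toward the top-k kernel principal subspace. All names, the feature-pool size `D`, the bandwidth `gamma`, and the QR re-orthonormalization step are assumptions made for this sketch.

```python
import numpy as np

def doubly_stochastic_kpca(X, k=2, D=1024, n_iter=300,
                           data_batch=512, feat_batch=128,
                           gamma=0.5, step=0.05, seed=0):
    """Illustrative doubly stochastic kernel PCA sketch.

    Each iteration samples BOTH a data mini-batch and a mini-batch of
    random Fourier feature indices, mirroring the 512 / 128 batch sizes
    reported in the experiment setup. This is a hypothetical sketch,
    not the paper's exact DSGD-KPCA algorithm.
    """
    rng = np.random.default_rng(seed)
    n, d = X.shape
    # Fixed pool of D random Fourier features approximating an RBF kernel:
    # phi(x)_j = sqrt(2/D) * cos(w_j . x + b_j), w_j ~ N(0, 2*gamma*I).
    W = rng.normal(scale=np.sqrt(2 * gamma), size=(D, d))
    b = rng.uniform(0, 2 * np.pi, size=D)
    # Coefficients of the k approximate eigenfunctions in feature space.
    U = rng.normal(scale=0.01, size=(D, k))
    for _ in range(n_iter):
        idx = rng.choice(n, size=min(data_batch, n), replace=False)
        fidx = rng.choice(D, size=min(feat_batch, D), replace=False)
        # Feature map restricted to the sampled feature mini-batch.
        Phi = np.sqrt(2.0 / len(fidx)) * np.cos(X[idx] @ W[fidx].T + b[fidx])
        # Oja-style gradient step toward the top-k covariance subspace.
        G = Phi.T @ (Phi @ U[fidx]) / len(idx)
        U[fidx] += step * G
        # QR re-orthonormalization keeps the k components from collapsing
        # onto the single leading eigenfunction.
        U, _ = np.linalg.qr(U)
    return W, b, U

def project(X, W, b, U):
    """Project data onto the learned components via the full feature map."""
    Phi = np.sqrt(2.0 / W.shape[0]) * np.cos(X @ W.T + b)
    return Phi @ U
```

A usage call such as `W, b, U = doubly_stochastic_kpca(X, k=2)` followed by `Z = project(X, W, b, U)` yields a 2-dimensional nonlinear embedding; the per-iteration cost depends only on the two mini-batch sizes, never on the full dataset or feature-pool size, which is what makes the doubly stochastic approach scale.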