Neural Tangent Kernel Maximum Mean Discrepancy
Authors: Xiuyuan Cheng, Yao Xie
NeurIPS 2021 | Conference PDF | Archive PDF | Plain Text | LLM Run Details
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Numerical experiments on synthetic and real-world datasets validate the theory and demonstrate the effectiveness of the proposed NTK-MMD statistic. |
| Researcher Affiliation | Academia | Xiuyuan Cheng, Department of Mathematics, Duke University (xiuyuan.cheng@duke.edu); Yao Xie, H. Milton Stewart School of Industrial and Systems Engineering, Georgia Institute of Technology (yao.xie@isye.gatech.edu) |
| Pseudocode | No | The paper does not contain structured pseudocode or algorithm blocks. |
| Open Source Code | Yes | Code available at https://github.com/xycheng/NTK-MMD/. |
| Open Datasets | Yes | We take the original MNIST dataset, which contains 28 × 28 gray-scale images... The Microsoft Research Cambridge-12 (MSRC-12) Kinect gesture dataset [18]. |
| Dataset Splits | No | The paper mentions splitting data into training and test sets but does not explicitly describe a validation set or its split proportions. |
| Hardware Specification | No | The paper does not provide specific hardware details (e.g., GPU/CPU models, memory) used for running its experiments. |
| Software Dependencies | No | The paper mentions using a '2-layer network' with 'soft-plus activation' and refers to the 'SGD' and 'Adam' optimizers, but does not specify software dependencies with version numbers (e.g., PyTorch 1.x, Python 3.x). |
| Experiment Setup | Yes | The online training is of 1 epoch (1 pass over the training set) with batch-size = 1. The bootstrap estimate of test threshold uses nboot = 400 permutations. The testing power is approximated by nrun = 500 Monte Carlo replicas... using a 2-layer network (1 hidden layer) with soft-plus activation. (A minimal sketch of this setup follows the table.) |
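
The sketch below illustrates the quoted experiment setup: a 2-layer (1 hidden layer) soft-plus network trained online for 1 epoch with batch-size 1, followed by a permutation estimate of the test threshold with 400 permutations. It is a hypothetical re-implementation, not the authors' released code: the input dimension, network width, learning rate, training loss, toy data, test level (0.05), and the difference-of-means statistic used as a stand-in for the NTK-MMD statistic are all assumptions; see the paper and the linked repository for the exact forms.

```python
# Hypothetical sketch of the quoted experiment setup (not the authors' code):
# 2-layer soft-plus network, 1 epoch of batch-size-1 SGD, and a permutation
# estimate of the test threshold with 400 permutations.
import torch
import torch.nn as nn

torch.manual_seed(0)

d, hidden = 2, 64                       # input dim and width are assumptions
net = nn.Sequential(                    # "2-layer network (1 hidden layer)"
    nn.Linear(d, hidden),
    nn.Softplus(),                      # "soft-plus activation"
    nn.Linear(hidden, 1),
)

# Toy two-sample training data: P -> label +1, Q -> label -1.
n = 200
x_tr = torch.cat([torch.randn(n, d), torch.randn(n, d) + 0.5])
y_tr = torch.cat([torch.ones(n, 1), -torch.ones(n, 1)])

# "Online training is of 1 epoch (1 pass over the training set) with batch-size = 1."
opt = torch.optim.SGD(net.parameters(), lr=1e-2)   # learning rate is an assumption
for i in torch.randperm(2 * n).tolist():
    opt.zero_grad()
    loss = -(y_tr[i] * net(x_tr[i:i + 1])).sum()   # hypothetical training loss
    loss.backward()
    opt.step()

# Witness-style statistic on held-out data: difference of mean network outputs
# (a stand-in for the NTK-MMD statistic; the paper gives its exact form).
with torch.no_grad():
    te_p = torch.randn(n, d)
    te_q = torch.randn(n, d) + 0.5
    stat = net(te_p).mean() - net(te_q).mean()

    # "bootstrap estimate of test threshold uses nboot = 400 permutations":
    # permute the pooled held-out samples and recompute the statistic.
    out = net(torch.cat([te_p, te_q])).squeeze()
    null = torch.empty(400)
    for b in range(400):
        idx = torch.randperm(2 * n)
        null[b] = out[idx[:n]].mean() - out[idx[n:]].mean()
    threshold = torch.quantile(null, 0.95)          # level-0.05 test is assumed

print(f"stat = {stat:.4f}, threshold = {threshold:.4f}, reject = {bool(stat > threshold)}")
```

Estimating the testing power as reported in the paper would wrap this single run in an outer loop of nrun = 500 Monte Carlo replicas and count the fraction of rejections.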