Neural Tangent Kernels Motivate Cross-Covariance Graphs in Neural Networks
Authors: Shervin Khalafi, Saurabh Sihag, Alejandro Ribeiro
ICML 2024
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Our theoretical contributions in this context are summarized next. ... We validated the insights drawn from our theoretical results via experiments on the publicly available resting state functional magnetic resonance imaging (rfMRI) data from the Human Connectome Project-Young Adult (HCP-YA) dataset (Van Essen et al., 2012). ... 4. Experiments |
| Researcher Affiliation | Academia | 1Department of Electrical and Systems Engineering, University of Pennsylvania, Philadelphia, PA. |
| Pseudocode | No | The paper does not contain any clearly labeled pseudocode or algorithm blocks. |
| Open Source Code | No | The paper does not provide any concrete access information (e.g., a specific repository link or an explicit statement about code release) for the source code of the described methodology. |
| Open Datasets | Yes | We validated the insights drawn from our theoretical results via experiments on the publicly available resting state functional magnetic resonance imaging (rfMRI) data from the Human Connectome Project-Young Adult (HCP-YA) dataset (Van Essen et al., 2012). |
| Dataset Splits | Yes | For each individual, we created N_train = 1000 and N_test = 100 training and test samples respectively by randomly sampling (without replacement) pairs of vectors z(t), z(t+1) from the time series of length N = 4500. (See the sampling sketch below the table.) |
| Hardware Specification | No | The paper does not provide specific hardware details (e.g., exact GPU/CPU models, memory amounts, or detailed computer specifications) used for running its experiments. |
| Software Dependencies | No | The paper mentions software like PyTorch (Paszke et al., 2019) and the Adam optimizer (Kingma & Ba, 2014), but does not provide specific version numbers for these or other software dependencies. |
| Experiment Setup | Yes | Regarding the choices of the parameters, K = 2 was chosen as (21) directly motivates using C_XY as the GSO for the K = 2 case. ... The learning rate η1 = 0.0125 was chosen for training the GNN models and η2 = 50·η1 for training the graph filters. (See the setup sketch below the table.) |
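
The pairing scheme quoted under Dataset Splits can be illustrated with a minimal sketch. This is our reconstruction, not the authors' code: only N = 4500, N_train = 1000, and N_test = 100 come from the paper, while the random seed, the signal dimension `d`, and the synthetic data are placeholder assumptions.

```python
import numpy as np

# Minimal sketch of the pairing scheme quoted under "Dataset Splits".
# Only N = 4500, N_train = 1000, and N_test = 100 come from the paper;
# the seed, the dimension d, and the synthetic data are placeholders.
N, N_train, N_test = 4500, 1000, 100
d = 100  # placeholder number of brain regions

rng = np.random.default_rng(0)
z = rng.standard_normal((N, d))  # stand-in for one subject's rfMRI time series

# Sample start indices t without replacement so that z(t+1) always exists.
starts = rng.permutation(N - 1)
train_t = starts[:N_train]
test_t = starts[N_train:N_train + N_test]

X_train, Y_train = z[train_t], z[train_t + 1]  # inputs z(t), targets z(t+1)
X_test, Y_test = z[test_t], z[test_t + 1]
```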
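Similarly, the Experiment Setup row quotes K = 2 with the cross-covariance matrix C_XY as the graph shift operator (GSO) and learning rates η1 = 0.0125 (GNN) and η2 = 50·η1 (graph filters). The sketch below shows one plausible reading of that configuration using PyTorch and Adam, both of which the paper mentions; the `GraphFilter` class, the stand-in GNN, and all data are illustrative assumptions, not the released implementation (the paper provides none).

```python
import torch

# Hedged sketch: a sample cross-covariance C_XY used as the GSO of an
# order-K graph filter, trained with the learning rates quoted above.
X = torch.randn(1000, 100)  # placeholder inputs z(t)
Y = torch.randn(1000, 100)  # placeholder targets z(t+1)

# Sample cross-covariance between inputs and targets.
Xc, Yc = X - X.mean(0), Y - Y.mean(0)
C_XY = Xc.T @ Yc / X.shape[0]

class GraphFilter(torch.nn.Module):
    """Order-K polynomial graph filter: H(x) = sum_k h_k S^k x, S = GSO."""
    def __init__(self, gso, K=2):
        super().__init__()
        self.gso, self.K = gso, K
        self.h = torch.nn.Parameter(torch.randn(K + 1))  # filter taps

    def forward(self, x):
        out, xk = self.h[0] * x, x
        for k in range(1, self.K + 1):
            xk = xk @ self.gso.T       # apply the GSO once per tap
            out = out + self.h[k] * xk
        return out

eta1 = 0.0125       # learning rate reported for the GNN models
eta2 = 50 * eta1    # learning rate reported for the graph filters

gnn = torch.nn.Sequential(torch.nn.Linear(100, 100), torch.nn.ReLU(),
                          torch.nn.Linear(100, 100))  # stand-in GNN
filt = GraphFilter(C_XY, K=2)

opt_gnn = torch.optim.Adam(gnn.parameters(), lr=eta1)
opt_filter = torch.optim.Adam(filt.parameters(), lr=eta2)
```

The polynomial-in-C_XY filter follows the standard graph filter definition; whether the authors used exactly this parameterization cannot be verified without released code.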