Permutation-Based Hypothesis Testing for Neural Networks
Authors: Francesca Mandel, Ian Barnett
AAAI 2024
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We evaluate the performance of our proposed tests through several simulation studies. Where applicable, we include comparisons to competing methods. Additionally, we apply the tests to evaluate feature associations in pediatric concussion data and to test genetic links to Parkinson's disease. |
| Researcher Affiliation | Academia | Department of Biostatistics, Epidemiology, and Informatics, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA |
| Pseudocode | Yes | Algorithm 1: Nonlinearity Test and Algorithm 2: Association Test (an illustrative permutation-test sketch appears after the table). |
| Open Source Code | No | The paper does not include an unambiguous statement that the authors are releasing the code for the work described, nor does it provide a direct link to a source-code repository. |
| Open Datasets | Yes | Our first application uses pediatric concussion data from the Center for Injury Research and Prevention at the Children's Hospital of Philadelphia (Corwin et al. 2021). Our second application uses genomic data from the Accelerating Medicines Partnership Parkinson's Disease (AMP PD) project (2019 v1 release) (Iwaki et al. 2021). |
| Dataset Splits | No | The paper mentions using a 'validation set' to minimize loss during training, but it does not provide specific dataset split information such as exact percentages, sample counts, citations to predefined splits, or detailed splitting methodology needed to reproduce the data partitioning. |
| Hardware Specification | No | The paper does not provide specific hardware details (exact GPU/CPU models, processor types with speeds, memory amounts, or detailed computer specifications) used for running its experiments. |
| Software Dependencies | No | The paper describes algorithms and training parameters but does not provide specific ancillary software details, such as library or solver names with version numbers (e.g., Python 3.8, PyTorch 1.9), needed to replicate the experiment. |
| Experiment Setup | Yes | The paper quotes two setups. (1) We fit a one-layer network with 40 nodes and sigmoid activation; the network is trained for 150 epochs using stochastic gradient descent, L2 regularization, and a learning rate that decreases at every epoch. (2) We employ stochastic gradient descent with an initial learning rate of 0.005 for NN-SCAT and 0.01 for NNPCSI; the learning rates decay by 1.5% at each epoch. The number of nodes (20) and regularization parameter (0.03) minimize validation loss, and the networks train for 175 epochs. (A hedged PyTorch sketch of this configuration appears after the table.) |
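
For orientation, here is a minimal sketch of how a permutation-based association test of the kind named in the Pseudocode row can be structured. It is not a reproduction of the paper's Algorithm 1 or 2: the regression setting, the test statistic (training MSE), and the function names are assumptions made for illustration, and the network hyperparameters only loosely follow the Experiment Setup row.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

def fit_loss(X, y):
    """Fit a one-hidden-layer sigmoid network and return its training MSE.
    Hyperparameters loosely follow the Experiment Setup row; this is not
    the authors' implementation."""
    net = MLPRegressor(hidden_layer_sizes=(40,), activation="logistic",
                       solver="sgd", max_iter=150, random_state=0)
    net.fit(X, y)
    return np.mean((net.predict(X) - y) ** 2)

def association_test(X, y, j, n_perm=100, seed=0):
    """Hypothetical permutation p-value for association between feature j
    and y: refit the network with column j permuted and compare losses."""
    rng = np.random.default_rng(seed)
    t_obs = fit_loss(X, y)                   # loss on the original data
    t_null = np.empty(n_perm)
    for b in range(n_perm):
        Xp = X.copy()
        Xp[:, j] = rng.permutation(X[:, j])  # break any X_j-y association
        t_null[b] = fit_loss(Xp, y)
    # A smaller loss on the original data than under the permutation null
    # is evidence of association; add-one smoothing keeps p > 0.
    return (1 + np.sum(t_null <= t_obs)) / (1 + n_perm)
```

Refitting the network for every permutation is the naive, computationally expensive version; it is shown here only to make the permutation logic concrete.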
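
And here is a hedged PyTorch sketch of the training configuration quoted in the Experiment Setup row: one hidden layer with sigmoid activation, SGD with L2 regularization (weight decay), and a learning rate that decays by 1.5% per epoch, expressed as an exponential scheduler with gamma = 0.985. The data, input dimension, and loss are placeholders, and the paper does not state its framework, so PyTorch itself is an assumption.

```python
import torch
from torch import nn

n_features, n_hidden = 10, 40               # placeholder input size; 40 nodes
model = nn.Sequential(
    nn.Linear(n_features, n_hidden),
    nn.Sigmoid(),                           # one hidden layer, sigmoid activation
    nn.Linear(n_hidden, 1),
)

# SGD with L2 regularization: 0.005 is the initial rate quoted for NN-SCAT,
# and weight_decay=0.03 is the regularization parameter quoted for the
# 20-node application networks (reused here purely as a placeholder).
optimizer = torch.optim.SGD(model.parameters(), lr=0.005, weight_decay=0.03)
# "Decay by 1.5% at each epoch" as an exponential schedule: lr *= 0.985.
scheduler = torch.optim.lr_scheduler.ExponentialLR(optimizer, gamma=0.985)
loss_fn = nn.MSELoss()

X = torch.randn(256, n_features)            # placeholder training data
y = torch.randn(256, 1)

for epoch in range(150):                    # 150 epochs, as in the first setup
    optimizer.zero_grad()                   # full-batch pass; mini-batching
    loss = loss_fn(model(X), y)             # omitted for brevity
    loss.backward()
    optimizer.step()
    scheduler.step()                        # apply the per-epoch decay
```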