Deep Learning for Functional Data Analysis with Adaptive Basis Layers
Authors: Junwen Yao, Jonas Mueller, Jane-Ling Wang
ICML 2021
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Across numerous classification/regression tasks with functional data, our method empirically outperforms other types of neural networks, and we prove that our approach is statistically consistent with low generalization error. Code is available at: https://github.com/jwyyy/AdaFNN |
| Researcher Affiliation | Collaboration | 1 UC Davis, 2 Amazon (work done prior to joining Amazon). Correspondence to: Junwen Yao <jwyao@ucdavis.edu>. |
| Pseudocode | Yes | Algorithm 1: AdaFNN Forward Pass (a hedged sketch of such a forward pass follows the table) |
| Open Source Code | Yes | Code is available at: https://github.com/jwyyy/AdaFNN |
| Open Datasets | Yes | Electricity Data: Electricity consumption readings for 5567 London homes, where each household's electricity usage is recorded every half hour (UK Power Networks, 2015). |
| Dataset Splits | Yes | Throughout we use * to indicate the λ1, λ2 values that performed best on the validation data, as these are the hyperparameter values that would be typically used in practice. |
| Hardware Specification | No | No specific hardware details (e.g., GPU/CPU models, memory amounts) used for experiments are mentioned in the paper. |
| Software Dependencies | No | No specific software dependencies with version numbers are mentioned in the paper. |
| Experiment Setup | Yes | A network with 3 hidden layers and 128 fully connected nodes per layer was used for Tasks 1-7 (real data) and our simulation studies. For Tasks 8 and 9 with small sample sizes, we used a smaller network with 2 hidden layers and 64 nodes and added dropout during training. All networks were trained up to 500 epochs (with 200-epoch early stopping patience) using mini-batches of size 128. AdaFNN was trained with 9 different combinations of the orthogonal regularization penalty λ1 ∈ {0, 0.5, 1} and L1 regularization penalty λ2 ∈ {0, 1, 2}. (A hedged sketch of this hyperparameter grid follows below.) |
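
To make the Pseudocode row more concrete, here is a minimal PyTorch sketch of an AdaFNN-style forward pass: each basis node is a small network over time whose output is integrated against the observed curve to produce a score, and the scores feed a fully connected head. This is not the authors' implementation (their code is at https://github.com/jwyyy/AdaFNN); the `BasisNode` class, the layer widths of the basis nodes, and the use of trapezoidal quadrature are illustrative assumptions.

```python
import torch
import torch.nn as nn

class BasisNode(nn.Module):
    """One learnable basis function beta_i(t), modeled here as a small MLP over time (assumed architecture)."""
    def __init__(self, hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(1, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, t):                 # t: (grid_len, 1) observation times
        return self.net(t).squeeze(-1)    # (grid_len,) basis values beta_i(t)

class AdaFNNSketch(nn.Module):
    """Each basis node scores an input curve via quadrature of beta_i(t) * x(t);
    the scores then feed a fully connected head (3 hidden layers of 128 nodes,
    matching the setup quoted in the table)."""
    def __init__(self, n_bases=4, out_dim=1):
        super().__init__()
        self.bases = nn.ModuleList([BasisNode() for _ in range(n_bases)])
        self.head = nn.Sequential(
            nn.Linear(n_bases, 128), nn.ReLU(),
            nn.Linear(128, 128), nn.ReLU(),
            nn.Linear(128, 128), nn.ReLU(),
            nn.Linear(128, out_dim),
        )

    def forward(self, x, t):
        # x: (batch, grid_len) discretized curves; t: (grid_len,) shared time grid
        t_col = t.unsqueeze(-1)
        # score c_i ≈ ∫ beta_i(t) x(t) dt via trapezoidal quadrature over the grid
        scores = [torch.trapz(b(t_col) * x, t) for b in self.bases]
        return self.head(torch.stack(scores, dim=-1))

# Example: 32 curves observed on a common grid of 100 time points.
model = AdaFNNSketch()
x = torch.randn(32, 100)
t = torch.linspace(0.0, 1.0, 100)
print(model(x, t).shape)   # torch.Size([32, 1])
```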
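The Experiment Setup row describes a 3 × 3 grid over the orthogonality penalty λ1 and the L1 penalty λ2. The sketch below shows one plausible way to encode that grid and the two penalty terms; the helper names `ortho_penalty` and `l1_penalty` and the exact penalty formulas are assumptions for illustration, not the paper's definitions, and the training loop itself is omitted.

```python
import itertools
import torch

# Hyperparameter grid quoted in the table: 9 (λ1, λ2) combinations,
# each trained up to 500 epochs with mini-batches of 128 and
# 200-epoch early-stopping patience.
LAMBDA1_GRID = (0.0, 0.5, 1.0)   # orthogonal regularization penalty λ1
LAMBDA2_GRID = (0.0, 1.0, 2.0)   # L1 regularization penalty λ2

def ortho_penalty(betas, t):
    """Assumed form: sum of squared pairwise inner products ∫ beta_i(t) beta_j(t) dt
    (i < j), approximated by trapezoidal quadrature on the observed grid."""
    penalty = torch.zeros(())
    for i in range(len(betas)):
        for j in range(i + 1, len(betas)):
            penalty = penalty + torch.trapz(betas[i] * betas[j], t) ** 2
    return penalty

def l1_penalty(scores):
    """Assumed form: L1 norm on basis scores, encouraging a sparse set of active bases."""
    return scores.abs().mean()

for lam1, lam2 in itertools.product(LAMBDA1_GRID, LAMBDA2_GRID):
    # training loop omitted; each configuration would be trained and compared on validation data
    print(f"train AdaFNN with λ1={lam1}, λ2={lam2}")
```

Per the Dataset Splits row, the (λ1, λ2) pair that performs best on the validation data is the one that would typically be selected in practice.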