Notice: The reproducibility variables underlying each score are classified using an automated LLM-based pipeline, validated against a manually labeled dataset. LLM-based classification introduces uncertainty and potential bias; scores should be interpreted as estimates. Full accuracy metrics and methodology are described in [1].
Stochastic gradient updates yield deep equilibrium kernels
Authors: Russell Tsuchida, Cheng Soon Ong
TMLR 2023
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | The DEKer empirically outperforms related neural network kernels on a series of benchmarks. 4 Experiments: Recall that the ff DEKer is defined for finite SGD iteration τ as an inner product of finite m-dimensional features. The DEKer is defined for finite SGD iteration τ as a limit as m → ∞ of an inner product of m-dimensional features. The ℓDEKer is defined as a limit as SGD iteration τ → ∞ of the DEKer. Although the DEKer and ℓDEKer are defined in terms of infinite-dimensional features, evaluations of the DEKer and ℓDEKer are scalar values and can be used to form matrices with a finite number of rows and columns. These matrices can be used in kernel methods to build predictive algorithms. In our analysis in the previous section, we considered 2 × 2 kernel matrices. Such analysis extends to N × N kernel matrices, where each element of the N × N kernel matrices may be related to an element of a 2 × 2 kernel matrix. All our implementations are vectorised, so that they operate on N × N matrices. 4.1 Measuring finite-width effects: We empirically measure the similarity of (finite-τ, finite-d) ff DEKer matrices and (infinite-τ, infinite-d) ℓDEKer matrices using the centered variant (Cortes et al., 2012) of kernel alignment (Cristianini et al., 2001), abbreviated CKA, as d increases. We vary d between 5 and 500 in steps of 5 and choose m = d^(3/2). For control, we also measure the CKA between the (finite-τ, finite-d) ff DEKer and the squared exponential kernel (SEK). See Figure 2, and Appendix J.1 for full details on the experimental setup. As expected (Theorem 4), the CKA between the DEKer matrices becomes larger as d and m increase, but not between the SEK and finite DEKer. 4.2 Inference using the DEKer: We use the DEKer for kernel ridge regression (Saunders, 1998) (KRR) (cf. Gaussian process regression (Rasmussen & Williams, 2006)) on a suite of benchmarks. For each dataset, we first partition the data into an 80/20 train-test split. Using the training set, we perform 5-fold cross-validation for hyperparameter selection using the default settings of scikit-learn's GridSearchCV, which performs model selection based on the coefficient of determination. The hyperparameter grid we search over is described in Table 4, Appendix J.2. We then compute the RMSE on the held-out test set using all training data. We repeat this procedure for 100 different random shuffles of the dataset, and find the sample average and standard deviation of the RMSE over the random shuffles. The results are reported in Table 2. |
| Researcher Affiliation | Academia | Russell Tsuchida, Data61, CSIRO; Cheng Soon Ong, Data61, CSIRO and Australian National University |
| Pseudocode | No | The paper provides mathematical descriptions and derivations of the methods but does not include any explicitly labeled pseudocode or algorithm blocks. |
| Open Source Code | Yes | Code available at https://github.com/RussellTsuchida/dek.git. |
| Open Datasets | Yes | The datasets are UCI benchmarks yacht (Gerritsma et al., 2013), diabetes (Kahn), energy (first and second value independently) (Tsanas & Xifara, 2012), concrete (Yeh, 2007) and wine (Cortez et al., 2009). |
| Dataset Splits | Yes | For each dataset, we first partition the data into an 80/20 train-test split. Using the training set, we perform 5-fold cross-validation for hyperparameter selection using the default settings of scikit-learn's GridSearchCV, which performs model selection based on the coefficient of determination. |
| Hardware Specification | No | The paper does not explicitly mention any specific hardware used for running its experiments, such as GPU or CPU models. |
| Software Dependencies | No | The paper mentions scikit-learn's GridSearchCV but does not provide a specific version number for this or any other software dependency. |
| Experiment Setup | Yes | The hyperparameter grid we search over is described in Table 4, Appendix J.2. We then compute the RMSE on the held-out test set using all training data. We repeat this procedure for 100 different random shuffles of the dataset, and find the sample average and standard deviation of the RMSE over the random shuffles. The results are reported in Table 2. The input X is preprocessed by subtracting the sample average and dividing by the sample standard deviation of each feature. Additionally, the target data y is mean-centered and scaled by the sample standard deviation. |
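
The finite-width experiment above compares kernel matrices via centered kernel alignment (CKA). As a minimal sketch of that similarity measure (not the paper's implementation), CKA centers both Gram matrices and takes a normalized Frobenius inner product, yielding a value of 1 when the two (centered) kernels are proportional:

```python
import numpy as np

def centered_kernel_alignment(K, L):
    """Centered kernel alignment (Cortes et al., 2012) between two
    n-by-n kernel matrices K and L. Returns a value in [0, 1] for PSD
    kernels; equals 1 when the centered matrices are proportional."""
    n = K.shape[0]
    H = np.eye(n) - np.ones((n, n)) / n  # centering matrix
    Kc = H @ K @ H
    Lc = H @ L @ H
    return np.sum(Kc * Lc) / (np.linalg.norm(Kc) * np.linalg.norm(Lc))
```

A kernel aligned with itself gives CKA = 1, which is the sanity check behind the paper's control comparison against the squared exponential kernel.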
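
The evaluation protocol described in the table (80/20 split, 5-fold GridSearchCV on the training set, standardized inputs and targets, RMSE averaged over repeated shuffles) can be sketched with scikit-learn. This is a hedged illustration, not the paper's code: the RBF kernel stands in for the DEKer, the `alpha` grid is hypothetical (the real grid is in the paper's Table 4, Appendix J.2), and only 5 shuffles are run instead of 100:

```python
import numpy as np
from sklearn.datasets import load_diabetes
from sklearn.kernel_ridge import KernelRidge
from sklearn.metrics import mean_squared_error
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.preprocessing import StandardScaler

# Hypothetical regularization grid; the paper's grid is in its Table 4.
param_grid = {"alpha": [1e-3, 1e-2, 1e-1, 1.0]}

X, y = load_diabetes(return_X_y=True)  # one of the cited UCI-style benchmarks
rmses = []
for seed in range(5):  # the paper uses 100 shuffles; 5 here for brevity
    X_tr, X_te, y_tr, y_te = train_test_split(
        X, y, test_size=0.2, random_state=seed)  # 80/20 split
    # Standardize inputs; mean-center and scale targets (training stats only).
    scaler = StandardScaler().fit(X_tr)
    y_mean, y_std = y_tr.mean(), y_tr.std()
    X_tr_s, X_te_s = scaler.transform(X_tr), scaler.transform(X_te)
    # GridSearchCV defaults: 5-fold CV, scored by the estimator's R^2.
    search = GridSearchCV(KernelRidge(kernel="rbf"), param_grid)
    search.fit(X_tr_s, (y_tr - y_mean) / y_std)
    pred = search.predict(X_te_s) * y_std + y_mean  # undo target scaling
    rmses.append(mean_squared_error(y_te, pred) ** 0.5)

print(f"RMSE: {np.mean(rmses):.2f} +/- {np.std(rmses):.2f}")
```

Refitting on all training data after the grid search matches the report's "using all training data" step; GridSearchCV does this automatically via its `refit=True` default.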