Learning Functional Transduction
Authors: Mathieu Chalvidal, Thomas Serre, Rufin VanRullen
NeurIPS 2023 | Conference PDF | Archive PDF | Plain Text | LLM Run Details
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Section 5 ("Numerical experiments"). Table 1: RMSE and compute costs of regression over 50 unseen datasets with n = 50 examples. Figure 2 (left): RMSEs (and 95% C.I.) on unseen operators as a function of the dataset size. |
| Researcher Affiliation | Collaboration | Mathieu Chalvidal, Capital Fund Management, Paris, France (mathieu.chalvidal@gmail.com); Thomas Serre, Carney Institute for Brain Science, Brown University, U.S. (thomas_serre@brown.edu); Rufin VanRullen, Centre de Recherche Cerveau & Cognition, CNRS, Université de Toulouse, France (rufin.vanrullen@cnrs.fr) |
| Pseudocode | No | The paper does not contain structured pseudocode or algorithm blocks. |
| Open Source Code | No | The paper does not provide concrete access to source code for the described methodology. |
| Open Datasets | Yes | Data is taken from the ERA5 reanalysis (Hersbach et al., 2020) publicly made available by the ECMWF... and MNIST-like images (LeCun and Cortes, 2010). |
| Dataset Splits | No | Our meta-learning objective is defined as: $\mathcal{J}(\theta) = \mathbb{E}_{\mathcal{D},\mathcal{T}}\big[\sum_{j \in \mathcal{T}} \mathcal{L}\big(\mathcal{T}_\theta(\mathcal{D}^{\mathrm{train}}_{O})(v_j), O(v_j)\big)\big]$ (7), which can be tackled with gradient-based optimization w.r.t. parameters θ provided $\mathcal{L}$ is differentiable (see S.I. for details). In order to estimate gradients of (7), we gather a meta-dataset of M operator example sets $(\mathcal{D}_{O_m})_{m \leq M}$ and form, at each training step, a Monte-Carlo estimator over a batch of k datasets from this meta-dataset with random train/test splits $(\mathcal{T}_k)$. While the paper mentions 'random train/test splits', it does not provide specific details such as split percentages, sample counts, or a fixed random seed for reproducibility (a minimal sketch of such a meta-training step is given below the table). |
| Hardware Specification | Yes | All the computation is carried on a single Nvidia Titan Xp GPU with 12GB memory. |
| Software Dependencies | No | The paper mentions using the Adam optimizer but does not provide specific version numbers for any software dependencies or libraries used in the experiments. |
| Experiment Setup | Yes | In all experiments, we use the Adam optimizer (Kingma and Ba, 2014) to train for a fixed number of steps with an initial learning rate gradually halved along training. All the computation is carried on a single Nvidia Titan Xp GPU with 12GB memory. Further details can be found in S.I. Specifically, we varied the correlation length (C.L.) of the Gaussian processes used to generate functions δ(x) and ν(x) and specified a different target time t1 ≠ 1; varying diffusion coefficient ν ∈ [0.1, 0.5]. (An illustrative sketch of the reported optimizer schedule is given below the table.) |
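
Below is a minimal sketch of the meta-training step referenced in the "Dataset Splits" row: a Monte-Carlo estimate of objective (7) over a batch of k operator example sets with random train/test splits. The `transducer` interface, the batch size `k`, the split size `n_train`, the MSE loss, and the use of PyTorch are illustrative assumptions; the paper does not release code.

```python
import random
import torch
import torch.nn.functional as F

def meta_training_step(transducer, meta_dataset, optimizer, k=4, n_train=50):
    """One Monte-Carlo estimate of the meta-objective in Eq. (7).

    `meta_dataset` is assumed to be a list of per-operator example sets
    (inputs v, targets O(v)), and `transducer(v_tr, o_tr, v_te)` is assumed
    to return predictions for held-out inputs given the train split.
    Both interfaces are hypothetical, not the authors' actual code.
    """
    optimizer.zero_grad()
    loss = 0.0
    for v, o in random.sample(meta_dataset, k):      # batch of k operator datasets
        perm = torch.randperm(v.shape[0])            # random train/test split (no seed reported)
        tr, te = perm[:n_train], perm[n_train:]
        preds = transducer(v[tr], o[tr], v[te])      # regress on train split, predict test inputs
        loss = loss + F.mse_loss(preds, o[te])       # L(T_theta(D_train)(v_j), O(v_j))
    loss = loss / k                                  # average over the batch
    loss.backward()                                  # gradients w.r.t. parameters theta
    optimizer.step()
    return loss.item()
```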
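
The "Experiment Setup" row reports Adam with an initial learning rate gradually halved along training; the sketch below shows one way such a schedule could be expressed. The concrete values (initial learning rate, total steps, halving interval) and the use of PyTorch's `StepLR` are assumptions, not numbers reported in the paper.

```python
import torch

model = torch.nn.Linear(64, 64)  # placeholder for the actual transducer network
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
# Halve the learning rate every 10k steps (interval and lr are illustrative).
scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=10_000, gamma=0.5)

for step in range(50_000):                          # fixed number of training steps
    loss = model(torch.randn(8, 64)).pow(2).mean()  # stand-in for the meta-objective
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    scheduler.step()
```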