Interpretable Meta-Learning of Physical Systems
Authors: Matthieu Blanke, Marc Lelarge
ICLR 2024
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | In this section, we validate these statements experimentally on various physical systems: Sections 5.1 and 5.2 deal with systems with linear parameters (as in (4.1)), on which we evaluate the interpretability of the algorithms. We then examine a non-analytical, general system in Section 5.3. We compare the performances of CAMEL with state-of-the-art meta-learning algorithms. |
| Researcher Affiliation | Academia | Matthieu Blanke Inria Paris, DI ENS, PSL Research University matthieu.blanke@inria.fr Marc Lelarge Inria Paris, DI ENS, PSL Research University marc.lelarge@inria.fr |
| Pseudocode | Yes | Algorithm 1 Gradient-based meta-training |
| Open Source Code | Yes | Our code and demonstration material are available at https://github.com/MB-29/meta-learning. |
| Open Datasets | No | The training data is generated by changing each charge's value in {1, ..., 5}^n, hence T = 5^n. |
| Dataset Splits | No | The resulting adapted predictor is defined as F(x; θ, w_{T+1}), and is evaluated by its performance on a separate test set from task T+1, averaged over the task distribution. |
| Hardware Specification | No | No specific hardware details (GPU/CPU models, memory amounts, or detailed computer specifications) are provided. |
| Software Dependencies | No | All neural networks are trained with the ADAM optimizer (Kingma & Ba, 2015). |
| Experiment Setup | Yes | For CoDA, we set d_ξ = r, chosen according to the system learned. For all the baselines, the adaptation minimization problem (2.5) is optimized with at least 10 gradient steps, until convergence. For training, the number of inner gradient steps of MAML and ANIL is chosen to be 1, to reduce the computational time. |
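
The Pseudocode, Software Dependencies, and Experiment Setup rows describe a gradient-based meta-training loop (Algorithm 1) optimized with ADAM, using a single inner gradient step per task for MAML/ANIL during training. The sketch below is a minimal illustration of that kind of loop, assuming the linear task parametrization F(x; θ, w) = v(x; θ) w used by CAMEL; the names (`FeatureNet`, `adapt`, `meta_train`) and all hyperparameters are illustrative placeholders, not the authors' implementation (see the linked repository for that).

```python
import torch
import torch.nn as nn


class FeatureNet(nn.Module):
    """Shared feature map v(x; theta); task weights w act linearly on its output
    (illustrative architecture, not the paper's)."""

    def __init__(self, in_dim=2, feature_dim=16):
        super().__init__()
        self.feature_dim = feature_dim
        self.net = nn.Sequential(
            nn.Linear(in_dim, 64), nn.Tanh(), nn.Linear(64, feature_dim)
        )

    def forward(self, x):
        return self.net(x)


def adapt(features, w0, x, y, n_steps=1, lr=1e-2):
    """Inner loop: a few gradient steps on the task-specific weights w,
    keeping the graph so the outer loop can differentiate through the update."""
    w = w0
    for _ in range(n_steps):
        loss = nn.functional.mse_loss(features(x) @ w, y)
        (grad,) = torch.autograd.grad(loss, w, create_graph=True)
        w = w - lr * grad
    return w


def meta_train(features, tasks, n_epochs=1000, inner_steps=1):
    """Outer loop: ADAM on the shared parameters, with one inner gradient step
    per task, as the quoted setup reports for MAML/ANIL training."""
    w0 = torch.zeros(features.feature_dim, 1, requires_grad=True)
    optimizer = torch.optim.Adam(list(features.parameters()) + [w0])
    for _ in range(n_epochs):
        meta_loss = torch.zeros(())
        for x_tr, y_tr, x_val, y_val in tasks:  # one tuple per training task
            w = adapt(features, w0, x_tr, y_tr, n_steps=inner_steps)
            meta_loss = meta_loss + nn.functional.mse_loss(features(x_val) @ w, y_val)
        optimizer.zero_grad()
        meta_loss.backward()
        optimizer.step()
    return features
```

Differentiating through the inner update (`create_graph=True`) is what makes this MAML-style meta-training rather than ordinary multi-task fitting; with the single inner step above, the per-epoch cost stays close to that of plain training.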
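
The Dataset Splits and Experiment Setup rows quote the evaluation protocol: on a held-out task T+1, the task weights are adapted with at least 10 gradient steps and the adapted predictor F(x; θ, w_{T+1}) is scored on a separate test set. Below is a hedged sketch of such an evaluation, reusing the `FeatureNet` feature map from the previous snippet; the function name and step counts are assumptions, not the authors' code.

```python
import torch
import torch.nn as nn


def evaluate_new_task(features, x_adapt, y_adapt, x_test, y_test, n_steps=10, lr=1e-2):
    """Fit task weights for the unseen task T+1 on its adaptation set
    (at least 10 gradient steps, as quoted for the baselines), then report
    the adapted predictor's error on a separate test set."""
    w = torch.zeros(features.feature_dim, 1, requires_grad=True)
    inner_opt = torch.optim.Adam([w], lr=lr)
    with torch.no_grad():
        phi_adapt = features(x_adapt)  # shared parameters theta stay frozen
        phi_test = features(x_test)
    for _ in range(n_steps):
        inner_opt.zero_grad()
        loss = nn.functional.mse_loss(phi_adapt @ w, y_adapt)
        loss.backward()
        inner_opt.step()
    with torch.no_grad():
        return nn.functional.mse_loss(phi_test @ w, y_test).item()
```

In the paper this score is averaged over the task distribution; here a single task is shown for brevity.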