Learning Equations for Extrapolation and Control
Authors: Subham Sahoo, Christoph Lampert, Georg Martius
ICML 2018
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Section 4, Experimental evaluation: In Fig. 5 the numerical results and also an illustrative output of EQL and the baselines are presented. |
| Researcher Affiliation | Academia | Indian Institute of Technology, Kharagpur, India; IST Austria, Klosterneuburg, Austria; Max Planck Institute for Intelligent Systems, Tübingen, Germany. |
| Pseudocode | No | The paper describes the method using diagrams and text, but does not contain structured pseudocode or algorithm blocks. |
| Open Source Code | Yes | The code and some data are available at https://github.com/martius-lab/EQL. |
| Open Datasets | No | The paper describes generating its own training data by sampling points and adding noise, but does not provide concrete access information for a publicly available or open dataset. |
| Dataset Splits | Yes | For all experiments, we have training data in a restricted domain, usually [−1, 1]^d, corrupted with noise, which is split into training and validation with a 90%/10% split (see the first sketch below the table). |
| Hardware Specification | No | The paper does not provide specific hardware details (exact GPU/CPU models, processor types, or memory amounts) used for running its experiments. |
| Software Dependencies | No | The paper states it was implemented in 'python based on the theano framework', but does not provide specific version numbers for these software components. |
| Experiment Setup | Yes | The following hyper-parameters were fixed: learning rate (Adam) α = 0.001, regularization (Adam) of ε = 0.0001, minibatch size of 20, number of units u = v = 10, i.e. 10 units per type in each layer. We use t1 = (1/4)T and t2 = (19/20)T, where T is the total number of epochs, large enough to ensure convergence, i.e. T = (L − 1) · 10000. Note that early stopping will be disadvantageous (see the configuration sketch below the table). |
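
To make the Dataset Splits row concrete, here is a minimal sketch of the described setup in Python. NumPy, the target formula, the dimensionality `d`, the sample count `n`, and the noise level are assumptions for illustration; the paper only states that inputs lie in a restricted domain, outputs are corrupted with noise, and the data is split 90%/10% into training and validation.

```python
import numpy as np

rng = np.random.default_rng(0)

d, n = 4, 10000            # input dimension and sample count (placeholders)
noise_std = 0.01           # noise level (placeholder)

# Inputs sampled from the restricted training domain [-1, 1]^d.
X = rng.uniform(-1.0, 1.0, size=(n, d))

# Placeholder target formula; the paper evaluates several such functions.
y = np.sin(np.pi * X[:, 0]) * X[:, 1] + rng.normal(0.0, noise_std, size=n)

# 90% / 10% train/validation split, as stated in the paper.
perm = rng.permutation(n)
split = int(0.9 * n)
X_train, y_train = X[perm[:split]], y[perm[:split]]
X_val, y_val = X[perm[split:]], y[perm[split:]]
```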
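
The Experiment Setup row can likewise be captured as a small configuration sketch. Every name below is ours rather than from the authors' code; the depth L = 3 is an arbitrary example, and the `l1_active` helper encodes one reading of t1 and t2 as the boundaries of the paper's phased training, in which L1 sparsity regularization is active only between t1 and t2.

```python
# Fixed hyper-parameters as reported in the paper; variable names are ours.
L = 3                         # network depth (example value, not from the paper)
T = (L - 1) * 10000           # total number of training epochs

hparams = {
    "learning_rate": 1e-3,    # Adam alpha
    "adam_epsilon": 1e-4,     # Adam regularization epsilon
    "minibatch_size": 20,
    "units_per_type": 10,     # u = v = 10 units per type in each layer
    "t1": T // 4,             # t1 = (1/4) T
    "t2": (19 * T) // 20,     # t2 = (19/20) T
}

def l1_active(epoch: int) -> bool:
    # Assumed reading: L1 regularization is applied only between t1 and t2;
    # before t1 training runs unregularized, and after t2 small weights are
    # clamped to zero instead of being penalized.
    return hparams["t1"] <= epoch < hparams["t2"]
```

On this reading, the quoted remark about early stopping also makes sense: stopping before t2 would skip the final phase that sparsifies the learned expression.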