Learning interpretable control inputs and dynamics underlying animal locomotion

Authors: Thomas Soares Mullen, Marine Schimel, Guillaume Hennequin, Christian K. Machens, Michael Orger, Adrien Jouary

ICLR 2024

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "We train a family of RNNs and show that sparse control signals activated at the onset of actions are sufficient for reproducing the observed postural sequences. (...) Next, we apply the method to modeling continuous locomotion in C. elegans. We evaluated models based on both (i) their ability to reconstruct the data (fraction R2 of variance explained by the mean of the posterior predictive distribution over observations), and (ii) the sparsity of the inferred control signals."
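As a concrete reading of the reconstruction metric quoted above, the fraction R2 of variance explained by the mean of the posterior predictive distribution can be sketched as follows. This is a minimal illustration only; the function name and array shapes are not from the paper.

```python
import numpy as np

def fraction_variance_explained(y_true, y_pred_mean):
    """R2: fraction of variance in the observations explained by the
    mean of the posterior predictive distribution (illustrative sketch)."""
    ss_res = np.sum((y_true - y_pred_mean) ** 2)        # residual sum of squares
    ss_tot = np.sum((y_true - y_true.mean(axis=0)) ** 2)  # total variance
    return 1.0 - ss_res / ss_tot

# Toy check: a perfect reconstruction explains all of the variance.
y = np.random.default_rng(0).normal(size=(100, 5))  # (time steps, observation dims)
print(fraction_variance_explained(y, y))  # 1.0
```

A model would be compared on this score alongside the sparsity of its inferred control signals, per the paper's stated criteria.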
Researcher Affiliation | Academia | "Thomas Soares Mullen1, Marine Schimel2, Guillaume Hennequin2, Christian K. Machens1, Michael B. Orger1 & Adrien Jouary1; 1 Department of Neuroscience, Champalimaud Foundation; 2 Department of Engineering, University of Cambridge"
Pseudocode | No | The paper describes the methods through mathematical equations and textual explanations, particularly in Section 3 and Appendices A and B, but it does not include any clearly labeled pseudocode or algorithm blocks.
Open Source Code | No | The paper does not provide an explicit statement or link to its own open-source code for the described methodology. It refers to previously published methods (e.g., iLQR-VAE by Schimel et al., 2022) but not its own implementation.
Open Datasets | Yes | "We compiled a dataset of 30800 tail angle time series classified according to the known repertoire of zebrafish larva (Marques et al. (2018), details in Appendix G)."
Dataset Splits | Yes | "At every time step we trained and tested a linear classifier using an 80%/20% split of the latent state trajectory at that time."
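The per-time-step decoding protocol quoted above can be sketched on synthetic data. The paper does not specify which linear classifier was used, so the nearest-centroid decoder below, along with all shapes and names, is an illustrative stand-in.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical stand-in for the latent states at a single time step:
# one latent vector per trial, each trial labelled with an action class.
n_trials, latent_dim = 200, 8
labels = rng.integers(0, 2, size=n_trials)
latents = rng.normal(size=(n_trials, latent_dim)) + 2.0 * labels[:, None]

# 80%/20% train/test split of the trials, as described for each time step.
perm = rng.permutation(n_trials)
n_train = int(0.8 * n_trials)
train, test = perm[:n_train], perm[n_train:]

# Minimal linear classifier: assign each held-out latent to the nearest
# class centroid estimated on the training split.
centroids = np.stack(
    [latents[train][labels[train] == c].mean(axis=0) for c in (0, 1)]
)
dists = np.linalg.norm(latents[test][:, None, :] - centroids[None], axis=-1)
accuracy = np.mean(dists.argmin(axis=1) == labels[test])
print(f"held-out accuracy: {accuracy:.2f}")
```

Repeating this fit at every time step yields a decoding-accuracy curve over the trial, which is how the split is used in the paper's analysis.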
Hardware Specification | Yes | "This work was performed using resources provided by the Cambridge Service for Data Driven Discovery (CSD3) operated by the University of Cambridge Research Computing Service (www.csd3.cam.ac.uk), provided by Dell EMC and Intel using Tier-2 funding from the Engineering and Physical Sciences Research Council (capital grant EP/T022159/1), and DiRAC funding from the Science and Technology Facilities Council (www.dirac.ac.uk)."
Software Dependencies | No | The paper describes the architecture and hyperparameters for the models used (e.g., Table S2 for LFADS hyperparameters), but it does not explicitly list specific version numbers for software dependencies such as Python, PyTorch, or other libraries.
Experiment Setup | Yes | "See Table S2 for important hyperparameter settings." Table S2: RNN type: GRU; Encoder Dimension: 64; Controller Dimension: 32; Control Dimension: 10; Generator Dimension: 120; AR(1) tau: 10; AR(1) noise variance: 0.1; Dropout rate: 0.3; Batch size: 75; Learning rate: 3e-4.
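For reference, the Table S2 settings can be collected into a single configuration object. The key names below are illustrative; the paper reports only the values, not a config format.

```python
# LFADS hyperparameters as reported in Table S2 of the paper.
# Key names are our own; only the values come from the paper.
lfads_hparams = {
    "rnn_type": "GRU",
    "encoder_dim": 64,
    "controller_dim": 32,
    "control_dim": 10,
    "generator_dim": 120,
    "ar1_tau": 10,
    "ar1_noise_variance": 0.1,
    "dropout_rate": 0.3,
    "batch_size": 75,
    "learning_rate": 3e-4,
}
print(len(lfads_hparams))  # 10
```

Note that without pinned software versions (see the Software Dependencies row), these values alone may not be enough to reproduce training exactly.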