Sourcerer: Sample-based Maximum Entropy Source Distribution Estimation
Authors: Julius Vetter, Guy Moss, Cornelius Schröder, Richard Gao, Jakob H Macke
NeurIPS 2024
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We benchmark our method on several tasks, and show that it can recover source distributions with substantially higher entropy than recent source estimation methods, without sacrificing the fidelity of the simulations. Finally, to demonstrate the utility of our approach, we infer source distributions for parameters of the Hodgkin-Huxley model from experimental datasets with hundreds of single-neuron measurements. |
| Researcher Affiliation | Academia | ¹Machine Learning in Science, Excellence Cluster Machine Learning, University of Tübingen; ²Tübingen AI Center; ³Department Empirical Inference, Max Planck Institute for Intelligent Systems, Tübingen, Germany |
| Pseudocode | Yes | Pseudocode for Sourcerer is provided in Algorithm 1. |
| Open Source Code | Yes | Code available at https://github.com/mackelab/sourcerer |
| Open Datasets | Yes | Using this surrogate, we estimate source distributions from a real-world dataset of electrophysiological recordings. The dataset [52] consists of 1033 electrophysiological recordings from the mouse motor cortex. |
| Dataset Splits | No | For all tasks except the Hodgkin-Huxley task (where the observed dataset is experimentally measured), we generate two datasets of observations of equal size from the same reference source distribution. The first is used to train the source model, and the second is used to evaluate the quality of the learned source. |
| Hardware Specification | Yes | All numerical experiments reported in this work were performed on an NVIDIA A100 GPU. |
| Software Dependencies | No | We use PyTorch [46] for the source distribution estimation and hydra [61] to track all configurations. |
| Experiment Setup | Yes | For the benchmark tasks, we used T = 500 linear decay steps from λ_{t=0} to λ_{t=T} = λ and optimized the source model using the Adam optimizer with a learning rate of 10⁻⁴ and weight decay of 10⁻⁵. The two high-dimensional simulators were optimized with a higher learning rate of 10⁻³ and T = 50 linear decay steps. |
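The λ schedule quoted in the Experiment Setup row can be sketched as a simple linear interpolation over the decay steps. This is a minimal, hypothetical sketch: the function name `lambda_schedule`, the starting value `lam_init = 0.0`, and the final value `lam_final = 1.0` are assumptions for illustration, not taken from the paper; only T = 500 and the idea of linearly interpolating λ over T steps come from the text.

```python
def lambda_schedule(step, T=500, lam_init=0.0, lam_final=1.0):
    """Linearly interpolate the weight λ from lam_init to lam_final
    over T steps, then hold it constant at lam_final.

    Assumed start value lam_init = 0.0 and final value lam_final = 1.0
    are illustrative; the paper only specifies T = 500 linear decay
    steps from λ_{t=0} to λ_{t=T} = λ for the benchmark tasks.
    """
    if step >= T:
        return lam_final
    return lam_init + (lam_final - lam_init) * step / T

# λ at the start, midpoint, and end of the 500-step schedule.
print(lambda_schedule(0))    # 0.0
print(lambda_schedule(250))  # 0.5
print(lambda_schedule(500))  # 1.0
```

In PyTorch the corresponding optimizer setup for the benchmark tasks would be `torch.optim.Adam(params, lr=1e-4, weight_decay=1e-5)`, with `lr=1e-3` for the two high-dimensional simulators.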