Notice: The reproducibility variables underlying each score are classified using an automated LLM-based pipeline, validated against a manually labeled dataset. LLM-based classification introduces uncertainty and potential bias; scores should be interpreted as estimates. Full accuracy metrics and methodology are described in [1].

BLEND: Behavior-guided Neural Population Dynamics Modeling via Privileged Knowledge Distillation

Authors: Zhengrui Guo, Fangxu Zhou, Wei Wu, Qichen Sun, Lishuang Feng, Jinzhuo Wang, Hao Chen

ICLR 2025 | Venue PDF | LLM Run Details

Reproducibility Variable Result LLM Response
Research Type Experimental Extensive experiments across neural population activity modeling and transcriptomic neuron identity prediction tasks demonstrate strong capabilities of BLEND, reporting over 50% improvement in behavioral decoding and over 15% improvement in transcriptomic neuron identity prediction after behavior-guided distillation. Furthermore, we empirically explore various behavior-guided distillation strategies within the BLEND framework and present a comprehensive analysis of effectiveness and implications for model performance.
Researcher Affiliation Academia Zhengrui Guo (The Hong Kong University of Science and Technology; Beijing Institute of Collaborative Innovation); Fangxu Zhou (Peking University); Wei Wu (Peking University); Qichen Sun (Peking University); Lishuang Feng (Beihang University; Beijing Institute of Collaborative Innovation); Jinzhuo Wang (Peking University); Hao Chen (The Hong Kong University of Science and Technology)
Pseudocode Yes Appendix A.1.1 (Supplementary Contents of BLEND Algorithm) presents Algorithm 1: Behavior-guided Teacher-Student Knowledge Distillation Framework (BLEND).
Open Source Code No Code will be made available at https://github.com/dddavid4real/BLEND. Our source code will be made publicly accessible upon acceptance of this paper.
Open Datasets Yes The first is a public benchmark for neural latent dynamics model evaluation from Pei et al. (2021), named Neural Latents Benchmark 21 (NLB 21). The second is a recent, public multi-modal neural dataset from Bugeon et al. (2022).
Dataset Splits Yes All three datasets share common dimensions (2,869 trials, 140 timepoints, 182 neurons, 2 behavioral variables) and are systematically divided into train/eval splits and held-in/held-out neurons, enabling rigorous testing of neural analysis methods.
Hardware Specification No The paper does not provide specific hardware details (exact GPU/CPU models, processor types with speeds, memory amounts, or detailed computer specifications) used for running its experiments.
Software Dependencies No The paper mentions using scikit-learn's mutual_info_regression function but does not specify its version or any other software dependencies with version numbers.
Experiment Setup Yes As the base model for our privileged knowledge distillation with behavior information in neural dynamics modeling, we set the hidden dimension to 64 and the factor size to 32. For model training, we set the batch size to 64, the learning rate to 1×10⁻³ with 5000 warm-up iterations, and the weight decay to 5×10⁻⁵. The mask ratio is set to 0.25. As the base model for our behavior-guided knowledge distillation in neural dynamics modeling, we set the Transformer layer number to 4 for the MC-Maze, MC-RTT, and Area2-Bump datasets, the hidden dimension to 128, and the number of attention heads to 2. For model training, we set the batch size to 64, the learning rate to 1×10⁻³ with 5000 warm-up iterations, and the weight decay to 5×10⁻⁵. The mask ratio is set to 0.25. As the base model for our behavior-guided knowledge distillation in transcriptomic identity prediction, we follow the same configurations as Mi et al. (2023) and set the dimension of the time-invariant embedding to 64. For the model architecture, 1 Transformer layer with 2 attention heads is used. For model training, the batch size is set to 1024 and the learning rate to 1×10⁻³.
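For readers cross-checking the reported settings, the hyperparameters quoted in the Experiment Setup row can be gathered into a single configuration sketch. The values below are taken directly from the paper's quoted setup; the dictionary names and key names are illustrative assumptions, not the authors' actual code.

```python
# Hedged sketch: the reported BLEND hyperparameters collected into plain
# Python dictionaries. Values are quoted from the paper's experiment setup;
# all variable and key names here are illustrative.

# Base model for privileged knowledge distillation (neural dynamics modeling).
privileged_kd_config = {
    "hidden_dim": 64,
    "factor_size": 32,
    "batch_size": 64,
    "learning_rate": 1e-3,
    "warmup_iters": 5000,
    "weight_decay": 5e-5,
    "mask_ratio": 0.25,
}

# Base model for behavior-guided distillation on MC-Maze, MC-RTT, Area2-Bump.
behavior_kd_config = {
    "num_transformer_layers": 4,
    "hidden_dim": 128,
    "num_attention_heads": 2,
    "batch_size": 64,
    "learning_rate": 1e-3,
    "warmup_iters": 5000,
    "weight_decay": 5e-5,
    "mask_ratio": 0.25,
}

# Transcriptomic identity prediction, following the configurations
# of Mi et al. (2023).
transcriptomic_config = {
    "time_invariant_embed_dim": 64,
    "num_transformer_layers": 1,
    "num_attention_heads": 2,
    "batch_size": 1024,
    "learning_rate": 1e-3,
}
```

Collecting the settings this way makes it easy to spot that the two neural-dynamics configurations share their optimizer settings (batch size, learning rate, warm-up, weight decay, mask ratio) and differ only in architecture.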