Learning Disentangled Behavior Embeddings

Authors: Changhao Shi, Sivan Schwartz, Shahar Levy, Shay Achvat, Maisan Abboud, Amir Ghanayim, Jackie Schiller, Gal Mishne

NeurIPS 2021

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We evaluate our methods on two multi-session video datasets: the Hand-reach dataset [19] we collected, and the Wide-Field Calcium Imaging (WFCI) dataset [24, 25]. Compared to competing approaches, DBE and VDBE enjoy superior performance on downstream tasks such as fine-grained behavioral motif generation and behavior decoding.
Researcher Affiliation | Academia | Changhao Shi (University of California, San Diego; cshi@ucsd.edu); Sivan Schwartz, Shahar Levy, Shay Achvat, Maisan Abboud, Amir Ghanayim (Technion Israel Institute of Technology; {sivan.s,shahar86,shay.achvat,maisanabboud,amir.gh122}@campus.technion.ac.il); Jackie Schiller (Technion Israel Institute of Technology; jackie@technion.ac.il); Gal Mishne (University of California, San Diego; gmishne@ucsd.edu)
Pseudocode | No | The paper does not contain a clearly labeled 'Pseudocode' or 'Algorithm' block.
Open Source Code | Yes | Code is available at https://github.com/Mishne-Lab/Disentangled-Behavior-Embedding.
Open Datasets | No | The paper uses the 'Hand-reach dataset we collected' and the 'Wide-Field Calcium Imaging (WFCI) dataset [24, 25]', but provides no links, DOIs, repositories, or explicit statements about their public availability: the Hand-reach dataset is described only as 'we collected', and the WFCI dataset is cited by reference without access details.
Dataset Splits | No | For the WFCI dataset, the paper states 'We randomly select 80% of the videos as training set and use the remaining 20% as test set'. For the Hand-reach dataset, '10% of the videos' are used for manual labeling and 'the rest of the videos are used as the training set for motif segmentation methods'. No validation split is mentioned, and no validation set size is given.
Hardware Specification | No | The paper does not provide specific details about the hardware (e.g., GPU/CPU models, memory) used for running the experiments.
Software Dependencies | No | The paper mentions various tools and models such as DeepLabCut (DLC), BehaveNet, and VAME, but does not specify version numbers for any software components or libraries used in the implementation.
Experiment Setup | Yes | We use the following evidence lower bound (ELBO) for optimization: L_{θ,φ}(x_{1:T}) = E_{q_φ(c_{1:T}, g_{1:T}, z_1)}[Σ_t log p_θ(x_t | z_t)] − α Σ_t D_KL(q_φ(c_t | x_{1:t}) || p_θ(c_t | z_{t−1})) − β Σ_t D_KL(q_φ(g_t | x_{1:t}) || p_θ(g_t)) − γ D_KL(q_φ(z_1 | x_{1:C}) || p_θ(z_1)), where α, β and γ are the trade-off parameters that control the information flow to each stochastic latent variable c_{1:T}, g_{1:T} and z_1 respectively [12].
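The weighted ELBO above combines a reconstruction log-likelihood with three KL terms, each scaled by its own trade-off parameter. The following is a minimal NumPy sketch of that objective, not the authors' implementation: `kl_diag_gaussians` and `weighted_elbo` are illustrative names, and the sketch assumes diagonal-Gaussian posteriors and priors for each latent variable.

```python
import numpy as np

def kl_diag_gaussians(mu_q, logvar_q, mu_p, logvar_p):
    """KL(q || p) between two diagonal Gaussians, summed over dimensions.

    Closed form: 0.5 * sum(log(s_p^2 / s_q^2)
                           + (s_q^2 + (mu_q - mu_p)^2) / s_p^2 - 1).
    """
    return 0.5 * np.sum(
        logvar_p - logvar_q
        + (np.exp(logvar_q) + (mu_q - mu_p) ** 2) / np.exp(logvar_p)
        - 1.0
    )

def weighted_elbo(recon_log_lik, kl_c, kl_g, kl_z,
                  alpha=1.0, beta=1.0, gamma=1.0):
    """ELBO with per-variable trade-off weights, mirroring the equation
    above: reconstruction term minus alpha/beta/gamma-weighted KL terms
    for c_{1:T}, g_{1:T} and z_1 (a maximization objective)."""
    return recon_log_lik - alpha * kl_c - beta * kl_g - gamma * kl_z
```

For example, two identical Gaussians give zero KL, and shifting the posterior mean makes the KL penalty (and hence the subtracted term) grow, which is how α, β and γ throttle how much information each latent variable is allowed to carry.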