Disentangled behavioural representations
Authors: Amir Dezfouli, Hassan Ashtiani, Omar Ghattas, Richard Nock, Peter Dayan, Cheng Soon Ong
NeurIPS 2019
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We illustrate the performance of our framework on synthetic data as well as a dataset including the behavior of patients with psychiatric disorders. |
| Researcher Affiliation | Collaboration | Data61, CSIRO; McMaster University; University of Chicago; Australian National University; University of Sydney; Max Planck Institute |
| Pseudocode | No | The paper describes its model and training objective using prose and mathematical formulas, but does not include any explicit pseudocode or algorithm blocks. |
| Open Source Code | No | The paper does not contain an explicit statement about releasing source code or a link to a code repository for the described methodology. |
| Open Datasets | Yes | BD dataset. This dataset [Dezfouli et al., 2019] comprises behavioural data from 34 patients with depression, 33 with bipolar disorder and 34 matched healthy controls. |
| Dataset Splits | Yes | For the synthetic data: "We generated N = 1500 agents (saving 30% for testing). The test data was used for determining the optimal number of training iterations (early stopping)." For the BD dataset: "Out of the 12 sequences of each subject, 8 were used for training and 4 for testing to determine the optimal number of training iterations (see Figure S7 for the training curves and Supplementary Material for more details)." |
| Hardware Specification | No | The paper does not provide specific hardware details such as GPU or CPU models, memory, or cloud computing specifications used for running the experiments. |
| Software Dependencies | No | "We use the automatic differentiation in Tensorflow [Abadi et al., 2016]." "The smoothed black lines were calculated using method gam in R [Wood, 2011]." These mentions do not include specific version numbers for the software. |
| Experiment Setup | No | "The model parameters Θenc and Θdec were trained based on the above objective function and using gradient descent optimisation method [Kingma and Ba, 2014]." This names the optimization method but omits specific hyperparameters such as learning rate, batch size, or number of epochs in the main text. |