Revealing interpretable object representations from human behavior
Authors: Charles Y. Zheng, Francisco Pereira, Chris I. Baker, Martin N. Hebart
ICLR 2019
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We used Tensorflow (Abadi et al., 2015) to fit the model (3) to the 1,450,119 triplets collected, using a 90-10 train-validation split to pick the regularization parameter λ. |
| Researcher Affiliation | Academia | Charles Y. Zheng, Section on Functional Imaging Methods, National Institute of Mental Health (charles.zheng@nih.gov); Francisco Pereira, Section on Functional Imaging Methods, National Institute of Mental Health (francisco.pereira@nih.gov); Chris I. Baker, Section on Learning and Plasticity, National Institute of Mental Health; Martin N. Hebart, Section on Learning and Plasticity, National Institute of Mental Health (martin.hebart@nih.gov) |
| Pseudocode | No | The paper does not contain any structured pseudocode or algorithm blocks. |
| Open Source Code | No | The paper does not provide any explicit statements about making its source code available or links to a code repository. |
| Open Datasets | No | The collection of the Odd-one-out behavioral dataset is an ongoing project by the authors. ... We plan to collect additional triplets and release all data by the end of the study. |
| Dataset Splits | Yes | We randomly split the dataset into ntrain triplets for training and nval for choosing the sparsity regularization parameter λ. ... using a 90-10 train-validation split to pick the regularization parameter λ. |
| Hardware Specification | No | The paper mentions 'Portions of this study used the high-performance computational capabilities of the Biowulf Linux cluster at the National Institutes of Health, Bethesda, MD (biowulf.nih.gov)' but does not specify details such as GPU models, CPU models, or memory. |
| Software Dependencies | No | The paper mentions 'Tensorflow (Abadi et al., 2015)' and the 'Adam algorithm (Kingma & Ba, 2015)' but does not provide specific version numbers for these or other software dependencies. |
| Experiment Setup | Yes | We used the Adam algorithm (Kingma & Ba, 2015) with an initial learning rate of 0.001 to minimize the objective function, using a fixed number of 1,000 epochs over the training set, which was sufficient to ensure convergence. |
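The setup rows above describe a softmax odd-one-out choice model over dot-product similarities, fit with an L1 sparsity penalty whose weight λ is chosen on a 90-10 train-validation split. Below is a minimal illustrative sketch of that objective and split in numpy; the toy sizes, variable names, and random triplets are assumptions for illustration, not the authors' code (the paper reports ~1,450,119 real triplets fit in TensorFlow with Adam for 1,000 epochs).

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy sizes; the paper fits ~1,450,119 collected triplets.
n_objects, n_dim, n_triplets = 20, 5, 1000
X = rng.random((n_objects, n_dim))  # non-negative embedding, one row per object
triplets = rng.integers(0, n_objects, size=(n_triplets, 3))

def triplet_nll(X, triplets, lam):
    """Odd-one-out negative log-likelihood plus L1 sparsity penalty.

    For a triplet (i, j, k), the probability that (i, j) is the most
    similar pair (i.e. k is the odd one out) is modeled as
        exp(S_ij) / (exp(S_ij) + exp(S_ik) + exp(S_jk)),
    where S_ab is the dot-product similarity of objects a and b.
    """
    S = X @ X.T
    i, j, k = triplets.T
    # Column 0 holds the "correct" pair (i, j) for each triplet.
    logits = np.stack([S[i, j], S[i, k], S[j, k]], axis=1)
    logits -= logits.max(axis=1, keepdims=True)  # numerical stability
    p = np.exp(logits)
    p /= p.sum(axis=1, keepdims=True)
    return -np.log(p[:, 0]).mean() + lam * np.abs(X).sum()

# 90-10 train-validation split, as described, for choosing lambda.
perm = rng.permutation(n_triplets)
n_train = int(0.9 * n_triplets)
train_trip, val_trip = triplets[perm[:n_train]], triplets[perm[n_train:]]

print(triplet_nll(X, train_trip, lam=0.01))
```

In the paper this objective is minimized with Adam (initial learning rate 0.001) for 1,000 epochs, and λ is the value giving the best validation fit; the sketch only defines the loss and the split.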