A Deep Choice Model

Authors: Makoto Otsuka, Takayuki Osogami

AAAI 2016

| Reproducibility Variable | Result | LLM Response |
| --- | --- | --- |
| Research Type | Experimental | "Our experiments show that the DCM adequately learns the choice that involves both of the two complexities in human choice." |
| Researcher Affiliation | Collaboration | Makoto Otsuka, CREST, JST, 4-1-8 Honcho, Kawaguchi, Saitama 332-0012, Japan (motsuka@ucla.edu); Takayuki Osogami, IBM Research Tokyo, 19-21 Hakozaki, Chuo-ku, Tokyo 103-8510, Japan (osogami@jp.ibm.com) |
| Pseudocode | No | The paper describes the model architecture and training algorithm in text and mathematical formulas but does not include any explicitly labeled pseudocode or algorithm blocks. |
| Open Source Code | No | The paper does not contain any explicit statement about releasing source code or a link to a code repository. |
| Open Datasets | Yes | "Specifically, we use the gray-scale images of handwritten digits from the MNIST dataset" (http://yann.lecun.com/exdb/mnist/index.html). |
| Dataset Splits | No | The paper describes using 50,000 MNIST images for deep learning, 300 images for training the choice model, and a separate 300 images for testing. However, it does not explicitly mention a validation set or validation split for the choice-model training. A sketch of this partition appears below the table. |
| Hardware Specification | Yes | "We ran the experiments on a Windows workstation having 16 cores of Intel Xeon CPU E5-2670 2.6 GHz and 64 GB memory." |
| Software Dependencies | No | The paper mentions Pylearn2 (Goodfellow et al. 2013) as a tool used, but does not provide specific version numbers for Pylearn2 or any other software libraries. |
| Experiment Setup | Yes | "The DCM is trained for 20 epochs, where all of the 3,000 pairs of (X, Y) are used as training data in each epoch, and the parameters of the DCM are updated with stochastic gradient descent (Eq. 13) using the minibatches of 10 images. The learning rate is set to 0.001." A training-loop sketch appears below the table. |
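The Dataset Splits row above reports a 50,000 / 300 / 300 partition of MNIST (feature learning, choice-model training, and choice-model testing). The snippet below is a minimal sketch of one way to reproduce such a partition; the array names, the fixed random seed, and the use of scikit-learn's `fetch_openml` loader are assumptions, not details from the paper, which points only to the MNIST download page.

```python
import numpy as np
from sklearn.datasets import fetch_openml  # assumed loader; the paper cites yann.lecun.com

# Load the 70,000 MNIST digits as 784-dimensional gray-scale vectors.
mnist = fetch_openml("mnist_784", version=1, as_frame=False)
images = mnist.data.astype(np.float32) / 255.0  # scale pixels to [0, 1]

# Hypothetical partition mirroring the counts reported in the paper:
# 50,000 images for deep feature learning, 300 for training the
# choice model, and a separate 300 for testing it.
rng = np.random.default_rng(0)  # fixed seed is an assumption
perm = rng.permutation(len(images))

deep_learning_set = images[perm[:50_000]]
choice_train_set = images[perm[50_000:50_300]]
choice_test_set = images[perm[50_300:50_600]]

print(deep_learning_set.shape, choice_train_set.shape, choice_test_set.shape)
# (50000, 784) (300, 784) (300, 784)
```

Because the paper mentions no validation split, the sketch makes none; adding one would be a further assumption.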
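The Experiment Setup row quotes the hyperparameters the paper does report: 20 epochs, 3,000 (X, Y) pairs per epoch, minibatches of 10, plain stochastic gradient descent, and a learning rate of 0.001. The sketch below only illustrates how those numbers fit together in a minibatch SGD loop. The `DeepChoiceModel`-style object with `params` and `grad_log_likelihood` is a hypothetical placeholder, since the paper's actual gradient (its Eq. 13) is not reproduced here, and the sign convention (ascent on a log-likelihood rather than descent on a loss) is also an assumption.

```python
import numpy as np

EPOCHS = 20           # reported in the paper
NUM_PAIRS = 3_000     # (X, Y) training pairs used in each epoch
BATCH_SIZE = 10       # reported minibatch size
LEARNING_RATE = 1e-3  # reported learning rate


def train_dcm(model, X_pairs, Y_pairs, rng=None):
    """Minibatch SGD loop matching the reported schedule.

    `model` is a hypothetical object exposing `params` (a list of numpy
    arrays) and `grad_log_likelihood(X_batch, Y_batch)` returning one
    gradient array per parameter; neither name comes from the paper.
    """
    if rng is None:
        rng = np.random.default_rng(0)  # fixed seed is an assumption

    for epoch in range(EPOCHS):
        order = rng.permutation(NUM_PAIRS)  # shuffle the 3,000 pairs each epoch
        for start in range(0, NUM_PAIRS, BATCH_SIZE):
            batch = order[start:start + BATCH_SIZE]
            grads = model.grad_log_likelihood(X_pairs[batch], Y_pairs[batch])
            # Plain SGD step with the reported rate of 0.001; the paper
            # does not mention momentum or learning-rate decay, so none
            # is used here.
            for param, grad in zip(model.params, grads):
                param += LEARNING_RATE * grad
    return model
```

The loop deliberately contains nothing the quoted setup does not state, so any concrete `grad_log_likelihood` would have to be filled in from the paper's own equations.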