Deep Recursive Neural Networks for Compositionality in Language

Authors: Ozan Irsoy, Claire Cardie

NeurIPS 2014 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We evaluate the proposed model on the task of fine-grained sentiment classification. Our results show that deep RNNs outperform associated shallow counterparts that employ the same number of parameters. Furthermore, our approach outperforms previous baselines on the sentiment analysis task, including a multiplicative RNN variant as well as the recently introduced paragraph vectors, achieving new state-of-the-art results. We provide exploratory analyses of the effect of multiple layers and show that they capture different aspects of compositionality in language.
Researcher Affiliation | Academia | Ozan Irsoy, Department of Computer Science, Cornell University, Ithaca, NY 14853, oirsoy@cs.cornell.edu; Claire Cardie, Department of Computer Science, Cornell University, Ithaca, NY 14853, cardie@cs.cornell.edu
Pseudocode | No | The paper contains mathematical formulations of the models but does not include any pseudocode or algorithm blocks. (An illustrative sketch of the formulation is given after the table.)
Open Source Code | No | The paper does not provide any statement or link indicating the release of open-source code for the methodology described.
Open Datasets | Yes | For experimental evaluation of our models, we use the recently published Stanford Sentiment Treebank (SST) [8]
Dataset Splits | Yes | We use the single training-validation-test set partitioning provided by the authors. Dropout rate is tuned over the development set out of {0, 0.1, 0.3, 0.5}. Additionally, we employ early stopping: out of all iterations, the model with the best development set performance is picked as the final model to be evaluated. (A sketch of this selection protocol is given after the table.)
Hardware Specification | No | The paper does not provide specific details about the hardware used to run the experiments (e.g., GPU models, CPU types).
Software Dependencies | No | The paper mentions methods like 'stochastic gradient descent' and 'AdaGrad' but does not specify any software dependencies with version numbers (e.g., Python, PyTorch, TensorFlow versions).
Experiment Setup | Yes | For the output layer, we employ the standard softmax activation... For the hidden layers we use the rectifier linear activation... We use the publicly available 300 dimensional word vectors... For regularization of the networks, we use the recently proposed dropout technique... Dropout rate is tuned over the development set out of {0, 0.1, 0.3, 0.5}... we use a small fixed additional L2 penalty (10^-5)... We use stochastic gradient descent with a fixed learning rate (.01)... We update weights after minibatches of 20 sentences. We run 200 epochs for training. Recursive weights within a layer (W^hh) are initialized as 0.5I + ϵ... All other weights are initialized as ϵ. (A sketch of these settings is given after the table.)
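
Although the paper contains no pseudocode, the mathematical formulation noted in the Pseudocode row is easy to sketch. The snippet below is an illustrative NumPy reconstruction of the deep recursive composition, assuming the rectifier (ReLU) activation quoted in the Experiment Setup row; the names (compose_node, W_L, W_R, V, predict_sentiment) are hypothetical, leaf handling is omitted, and this is not the authors' code.

    import numpy as np

    def relu(x):
        return np.maximum(0.0, x)

    def softmax(z):
        e = np.exp(z - np.max(z))
        return e / e.sum()

    def compose_node(h_left, h_right, params):
        # h_left / h_right: lists of per-layer hidden vectors of the two children.
        # params: list of (W_L, W_R, V, b) tuples, one per layer; V is unused at layer 0.
        h_prev, h_layers = None, []
        for i, (W_L, W_R, V, b) in enumerate(params):
            pre = W_L @ h_left[i] + W_R @ h_right[i] + b
            if i > 0:
                pre = pre + V @ h_prev  # connection from the layer below at the same node
            h = relu(pre)
            h_layers.append(h)
            h_prev = h
        return h_layers

    def predict_sentiment(h_top, U, c):
        # Softmax classifier on the top layer's hidden vector (5-way sentiment in SST).
        return softmax(U @ h_top + c)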
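The model-selection protocol quoted in the Dataset Splits row (dropout tuned over the development set, early stopping on development performance) amounts to a grid-plus-checkpoint loop. The sketch below is an illustration under that assumption: init_model, train_one_epoch, and evaluate are caller-supplied placeholders, not functions from the paper.

    import copy

    def select_model(init_model, train_one_epoch, evaluate, train_data, dev_data,
                     dropout_grid=(0.0, 0.1, 0.3, 0.5), n_epochs=200):
        # Tune dropout on the development split and keep the best dev-set iterate.
        best_acc, best_model = float("-inf"), None
        for p in dropout_grid:
            model = init_model(dropout=p)
            for _ in range(n_epochs):
                train_one_epoch(model, train_data)     # one pass of minibatch SGD
                acc = evaluate(model, dev_data)
                if acc > best_acc:                     # early stopping criterion
                    best_acc = acc
                    best_model = copy.deepcopy(model)  # snapshot the best-so-far model
        return best_model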
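A minimal sketch of the initialization and optimizer settings quoted in the Experiment Setup row follows. Only the 0.5I + ϵ recursive initialization, the 0.01 learning rate, and the 10^-5 L2 penalty come from the paper; the noise scale and distribution, the 300-unit hidden width, and all helper names are illustrative assumptions.

    import numpy as np

    rng = np.random.default_rng(0)
    EPS = 0.01  # scale of the small random noise (assumed; the paper only says it is small)

    def init_recursive_weight(dim):
        # Recursive within-layer weights (W^hh): 0.5 * identity plus small noise.
        return 0.5 * np.eye(dim) + EPS * rng.standard_normal((dim, dim))

    def init_other_weight(shape):
        # All other weights: small noise only.
        return EPS * rng.standard_normal(shape)

    def sgd_step(weights, grads, lr=0.01, l2=1e-5):
        # Plain SGD with a fixed learning rate and a small additional L2 penalty.
        return {k: w - lr * (grads[k] + l2 * w) for k, w in weights.items()}

    W_hh = init_recursive_weight(300)  # illustrative width; the paper varies it with depth
    U = init_other_weight((5, 300))    # 5-way fine-grained sentiment output layer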