Neural Ideal Point Estimation Network

Authors: Kyungwoo Song, Wonsung Lee, Il-Chul Moon

AAAI 2018

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "We performed the five-fold cross-validation to quantitatively evaluate the variations of NIPENs, and the performance measures are RMSE, MAE, accuracy, and negative average log-likelihood (NALL) measures. We compared nine models: five baseline models in section 4.2, and four NIPEN variations." (A minimal sketch of this evaluation protocol follows the table.)
Researcher Affiliation | Academia | Kyungwoo Song, Wonsung Lee, Il-Chul Moon; Korea Advanced Institute of Science and Technology, 291 Daehak-ro, Yuseong-gu, Daejeon 34141, South Korea; {gtshs2,aporia,icmoon}@kaist.ac.kr
Pseudocode | No | The paper describes the model architecture and the inference process, but it does not include any clearly labeled pseudocode or algorithm block.
Open Source Code | No | The paper states: "For the research community, we released the dataset on https://github.com/gtshs2/NIPEN (Politic2013 was collected from (Gu et al. 2014))". This announces the dataset release, not the source code for the method.
Open Datasets | Yes | The paper uses two roll-call datasets, Politic2013 and Politic2016, with descriptive statistics in Table 1: "For the research community, we released the dataset on https://github.com/gtshs2/NIPEN (Politic2013 was collected from (Gu et al. 2014))".
Dataset Splits | Yes | "We performed the five-fold cross-validation to quantitatively evaluate the variations of NIPENs."
Hardware Specification | No | The paper gives no details about the hardware used for the experiments (e.g., GPU/CPU models, memory).
Software Dependencies | No | The paper states: "We utilized the Tensorflow library (Abadi et al. 2016) to optimize the parameters," but it specifies no version for TensorFlow or any other software dependency.
Experiment Setup | No | The paper describes the model structure and parameter inference but provides no explicit hyperparameter values (e.g., learning rate, batch size, number of epochs) or training schedule.
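
Because the paper reports RMSE, MAE, accuracy, and negative average log-likelihood (NALL) under five-fold cross-validation but releases neither source code nor hyperparameters, the following is a minimal sketch of that evaluation protocol. Only the four measures and the five-fold split come from the paper; the synthetic data, the feature shapes, and the logistic-regression stand-in for the NIPEN model are assumptions made purely for illustration.

    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import KFold

    # Illustrative synthetic roll-call data (assumption, not the released
    # datasets): rows are legislator-bill pairs, y is the binary yea/nay vote.
    rng = np.random.default_rng(0)
    X = rng.normal(size=(1000, 8))
    y = (X @ rng.normal(size=8) + rng.normal(size=1000) > 0).astype(int)

    def evaluate(y_true, p_yea, eps=1e-12):
        # RMSE, MAE, accuracy, and negative average log-likelihood (NALL).
        p = np.clip(p_yea, eps, 1 - eps)
        rmse = np.sqrt(np.mean((y_true - p) ** 2))
        mae = np.mean(np.abs(y_true - p))
        acc = np.mean((p >= 0.5) == y_true)
        nall = -np.mean(y_true * np.log(p) + (1 - y_true) * np.log(1 - p))
        return rmse, mae, acc, nall

    scores = []
    for train_idx, test_idx in KFold(n_splits=5, shuffle=True, random_state=0).split(X):
        # Logistic regression stands in for the unreleased NIPEN model.
        model = LogisticRegression().fit(X[train_idx], y[train_idx])
        p_yea = model.predict_proba(X[test_idx])[:, 1]  # predicted P(yea)
        scores.append(evaluate(y[test_idx], p_yea))

    print(np.mean(scores, axis=0))  # fold-averaged RMSE, MAE, accuracy, NALL

Under this binary formulation, NALL is simply the mean binary cross-entropy of the predicted yea probabilities, which makes it bounded below by zero and directly comparable across folds.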