Quantum-Inspired Interactive Networks for Conversational Sentiment Analysis

Authors: Yazhou Zhang, Qiuchi Li, Dawei Song, Peng Zhang, Panpan Wang

IJCAI 2019

Reproducibility assessment (each entry gives the variable, the result, and the supporting LLM response):
Research Type: Experimental. Extensive experiments are conducted on the MELD and IEMOCAP datasets, and the experimental results demonstrate the effectiveness of the QIN model.
Researcher Affiliation: Academia. College of Intelligence and Computing, Tianjin University, Tianjin, China; School of Computer Science and Technology, Beijing Institute of Technology, Beijing, China; Zhejiang Lab, Hangzhou, China; Department of Information Engineering, University of Padua, Padua, Italy; School of Computing and Communications, The Open University, United Kingdom.
Pseudocode: No. No pseudocode or algorithm blocks were found in the paper.
Open Source Code: No. The paper provides a link: "The detailed inference process is given on https://github.com/anonymity-anonymity/influence-model.git". However, the "anonymity-anonymity" in the URL suggests a temporary repository created for anonymous review rather than a concrete public release.
Open Datasets: Yes. "We conduct experiments on the MELD and IEMOCAP datasets to validate the effectiveness of QIN model." MELD is available at https://affective-meld.github.io/, and IEMOCAP at http://sail.usc.edu/iemocap/.
Dataset Splits: No. The paper mentions the MELD and IEMOCAP datasets but does not provide specific training, validation, or test splits (e.g., percentages, sample counts, or citations to predefined splits).
Hardware Specification: No. No specific hardware details (e.g., GPU/CPU models or memory amounts) used for running the experiments are provided in the paper.
Software Dependencies: No. The paper mentions general techniques and tools such as GloVe, LSTM, and CNN, but does not provide version numbers for any software dependencies (e.g., Python, PyTorch, TensorFlow, scikit-learn).
Experiment Setup: Yes. Hyperparameter settings: "In this work, we use the GloVe word vector [Pennington et al., 2014] to find word embeddings. The dimensionality is set to 300. All weight matrices are given their initial values by sampling from a uniform distribution U(-0.1, 0.1), and all biases are set to zeros. We set the initial learning rate to 0.001. The batch size is 60. The coefficient of L2 normalization in the objective function is set to 10^-5, and the dropout rate is set to 0.5."
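
For readers who want to mirror this reported configuration, the sketch below collects the stated hyperparameters into a minimal training setup. It assumes PyTorch; the optimizer (Adam), the LSTM encoder, and the QINModel placeholder are illustrative assumptions, since the paper specifies neither the framework nor the optimizer, and the actual QIN architecture is not reproduced here.

```python
# Minimal sketch of the reported hyperparameter setup, assuming PyTorch.
# Adam, the LSTM encoder, and the QINModel placeholder are assumptions only.
import torch
import torch.nn as nn

EMBED_DIM = 300      # GloVe embedding dimensionality reported in the paper
BATCH_SIZE = 60      # reported batch size (used when building the DataLoader)
LEARNING_RATE = 1e-3
L2_COEFF = 1e-5      # coefficient of L2 normalization in the objective
DROPOUT_RATE = 0.5


class QINModel(nn.Module):
    """Hypothetical stand-in for the QIN architecture (not described here)."""

    def __init__(self, vocab_size: int, num_classes: int):
        super().__init__()
        # In the paper the embedding table is initialized from pretrained GloVe vectors.
        self.embedding = nn.Embedding(vocab_size, EMBED_DIM)
        self.encoder = nn.LSTM(EMBED_DIM, 128, batch_first=True)
        self.dropout = nn.Dropout(DROPOUT_RATE)
        self.classifier = nn.Linear(128, num_classes)

    def forward(self, token_ids):
        embedded = self.embedding(token_ids)
        _, (hidden, _) = self.encoder(embedded)
        return self.classifier(self.dropout(hidden[-1]))


def init_weights(model: nn.Module) -> None:
    """All weight matrices drawn from U(-0.1, 0.1); all biases set to zero."""
    for name, param in model.named_parameters():
        if "bias" in name:
            nn.init.zeros_(param)
        elif param.dim() > 1:
            nn.init.uniform_(param, -0.1, 0.1)


model = QINModel(vocab_size=20_000, num_classes=7)  # sizes are placeholders
init_weights(model)
optimizer = torch.optim.Adam(model.parameters(), lr=LEARNING_RATE,
                             weight_decay=L2_COEFF)
```

Here the `weight_decay` argument stands in for the L2 term in the objective function; if the original objective adds the penalty explicitly, an equivalent choice is to keep `weight_decay=0` and add `L2_COEFF * sum(p.pow(2).sum() for p in model.parameters())` to the loss.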