Intersubjectivity and Sentiment: From Language to Knowledge
Authors: Lin Gui, Ruifeng Xu, Yulan He, Qin Lu, Zhongyu Wei
IJCAI 2016
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Evaluations on the IMDB, Yelp 2013, and Yelp 2014 datasets show that the proposed approach achieves state-of-the-art performance. |
| Researcher Affiliation | Academia | (1) Laboratory of Network Oriented Intelligent Computation, Shenzhen Graduate School, Harbin Institute of Technology, Shenzhen, China; (2) School of Engineering and Applied Science, Aston University, United Kingdom; (3) Department of Computing, The Hong Kong Polytechnic University, Hong Kong; (4) Computer Science Department, The University of Texas at Dallas, Texas 75080, USA |
| Pseudocode | No | The paper describes procedures in narrative text and mathematical equations but does not include any structured pseudocode or algorithm blocks. |
| Open Source Code | No | The paper does not contain any statements about releasing code, nor does it provide links to a code repository. |
| Open Datasets | Yes | We evaluate our algorithm on three product review datasets, including IMDB [Diao et al., 2014] and the Yelp Dataset Challenge in 2013 and 2014. |
| Dataset Splits | No | The paper mentions 'training data' and 'test set' but does not specify details on validation splits (e.g., percentages, sample counts, or explicit mention of a validation set) within the text. |
| Hardware Specification | No | The paper does not provide any specific details about the hardware (e.g., CPU, GPU models, memory, or cloud instances) used for running the experiments. |
| Software Dependencies | No | The paper mentions tools like 'word2vec' and 'CNN' but does not specify version numbers for any software components or libraries used in their implementation. |
| Experiment Setup | No | The paper states that 'the training of intersubjectivity embeddings uses the top 20k terms and the negative sampling method' but lacks specific neural-network training hyperparameters such as learning rates, batch sizes, or the number of epochs (see the sketch after this table). |
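For readers attempting a reproduction of the embedding stage, the following is a minimal sketch of the two settings the paper does report: a vocabulary capped at the top 20k terms and training with negative sampling. It is not the authors' intersubjectivity model, only a generic skip-gram word2vec setup in gensim 4.x used to illustrate those settings; every other hyperparameter (`vector_size`, `window`, `negative`, `min_count`, `epochs`) is an assumed placeholder, not a value from the paper.

```python
# Sketch of a skip-gram model with negative sampling over a 20k-term
# vocabulary, mirroring the two settings reported in the paper.
# Requires gensim 4.x. All other hyperparameters are assumptions.
from gensim.models import Word2Vec

# Hypothetical toy corpus: one tokenized review per inner list.
corpus = [
    ["the", "movie", "was", "surprisingly", "good"],
    ["service", "was", "slow", "but", "the", "food", "was", "great"],
]

model = Word2Vec(
    sentences=corpus,
    sg=1,                    # skip-gram architecture
    negative=5,              # negative sampling; the count of 5 is an assumption
    max_final_vocab=20_000,  # keep only the top 20k terms, as in the paper
    vector_size=100,         # assumed embedding dimensionality
    window=5,                # assumed context window
    min_count=1,             # assumed; kept low for this toy corpus
    epochs=5,                # assumed number of training passes
)

# Inspect the first few dimensions of a learned term vector.
print(model.wv["movie"][:5])
```

A full reproduction would also need the paper's unreported choices (learning rate, batch size, epochs), which is precisely the gap this row flags.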