Entrainment2Vec: Embedding Entrainment for Multi-Party Dialogues
Authors: Zahra Rahimi, Diane Litman (pp. 8681-8688)
AAAI 2020 | Conference PDF | Archive PDF | Plain Text | LLM Run Details
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Experimental evaluations demonstrate that the group entrainment embedding improves performance for the downstream task of predicting group outcomes compared to the state-of-the-art methods. To evaluate the utility of our entrainment representations, we use them to measure entrainment in the freely available Teams Corpus (Litman et al. 2016). |
| Researcher Affiliation | Academia | Zahra Rahimi, Diane Litman, Intelligent Systems Program, University of Pittsburgh, Pittsburgh, PA 15260, {zar10, dlitman}@pitt.edu |
| Pseudocode | No | The paper describes its methods and approaches using textual descriptions and mathematical equations but does not include any clearly labeled 'Pseudocode' or 'Algorithm' blocks, nor does it present structured steps formatted like code. |
| Open Source Code | No | The paper does not include an explicit statement about releasing source code or provide a link to a code repository for the methodology described. |
| Open Datasets | Yes | To evaluate the utility of our entrainment representations, we use them to measure entrainment in the freely available Teams Corpus (Litman et al. 2016). |
| Dataset Splits | Yes | We use support vector machines with RBF kernel and perform leave-one-out cross validation. |
| Hardware Specification | No | The paper does not provide specific details regarding the hardware (e.g., GPU/CPU models, memory, or cloud instance types) used to run the experiments. |
| Software Dependencies | No | The paper mentions using the RMSProp optimization algorithm but does not specify any software libraries, frameworks, or their version numbers used in the implementation (e.g., Python version, TensorFlow/PyTorch version). |
| Experiment Setup | Yes | We have minimum 100 and maximum 150 epochs and stop early if the training loss is smaller than 0.1 to avoid overfitting. The batch size is set to 20. RMSProp optimization algorithm is used with learning rate equal to 0.001. |
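
The Dataset Splits row states that an RBF-kernel SVM is evaluated with leave-one-out cross validation. The sketch below illustrates that evaluation protocol only; scikit-learn is an assumption (the paper does not name its toolkit), and the feature matrix and labels are random placeholders standing in for the entrainment embeddings and group-outcome labels.

```python
# Hedged sketch of the downstream evaluation in the "Dataset Splits" row:
# RBF-kernel SVM with leave-one-out cross validation.
# Assumptions: scikit-learn as the toolkit; X and y are random placeholders,
# not the actual Teams Corpus entrainment embeddings or outcome labels.
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import LeaveOneOut, cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(60, 32))        # placeholder: one embedding vector per team
y = rng.integers(0, 2, size=60)      # placeholder binary group-outcome labels

clf = SVC(kernel="rbf")              # RBF kernel, as stated in the paper
scores = cross_val_score(clf, X, y, cv=LeaveOneOut())
print(f"Leave-one-out accuracy: {scores.mean():.3f}")
```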
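The Experiment Setup row reports the embedding model's training schedule: RMSProp with learning rate 0.001, batch size 20, between 100 and 150 epochs, and early stopping once the training loss drops below 0.1. The sketch below encodes only that schedule; PyTorch and the tiny placeholder model and data are assumptions, since the paper does not disclose its framework or exact architecture.

```python
# Hedged sketch of the training schedule in the "Experiment Setup" row.
# Assumptions: PyTorch as the framework; the model, loss, and data are
# placeholders and do not reproduce the paper's entrainment embedding network.
import torch
from torch import nn
from torch.utils.data import DataLoader, TensorDataset

X = torch.randn(200, 16)                       # placeholder inputs
y = torch.randn(200, 8)                        # placeholder targets
loader = DataLoader(TensorDataset(X, y), batch_size=20, shuffle=True)

model = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 8))
optimizer = torch.optim.RMSprop(model.parameters(), lr=0.001)
loss_fn = nn.MSELoss()

MIN_EPOCHS, MAX_EPOCHS, LOSS_THRESHOLD = 100, 150, 0.1
for epoch in range(MAX_EPOCHS):
    epoch_loss = 0.0
    for xb, yb in loader:
        optimizer.zero_grad()
        loss = loss_fn(model(xb), yb)
        loss.backward()
        optimizer.step()
        epoch_loss += loss.item() * len(xb)
    epoch_loss /= len(X)
    # Early-stopping rule reported in the paper: train at least 100 epochs,
    # then stop once the training loss falls below 0.1 to limit overfitting.
    if epoch + 1 >= MIN_EPOCHS and epoch_loss < LOSS_THRESHOLD:
        break
```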