Representations of Context in Recognizing the Figurative and Literal Usages of Idioms

Authors: Changsheng Liu, Rebecca Hwa

AAAI 2017

Reproducibility assessment (variable, result, and supporting LLM response):
Research Type: Experimental. "Experimental result suggests that the proposed method performs better for a wider range of idioms than previous methods." and "To verify our hypothesis that the automatic recognition of idiomatic usages depends on addressing the interactions between properties of the idioms (i.e., context diversity and semantic analyzability) and the contextual representations of the idiom, we conduct a comparative study across four representative state-of-the-art methods."
Researcher Affiliation: Academia. "Changsheng Liu, Rebecca Hwa, Computer Science Department, University of Pittsburgh, Pittsburgh, PA 15260, USA, {changsheng,hwa}@cs.pitt.edu"
Pseudocode: No. No pseudocode or algorithm blocks are present.
Open Source Code: No. The paper mentions using and reimplementing existing methods and tools, but provides no link to, or explicit statement about releasing, its own source code.
Open Datasets: Yes. "The data is from SemEval-2013 Task 5B (Korkontzelos et al. 2013)."
Dataset Splits: Yes. "We run ten-fold cross validation for the two supervised methods (Rajani et al. and Peng et al.)." and "In each round of the cross validation, we randomly select half of the training sample as the example set; the remaining half of the training sample is used to learn the weight for the three representations."
Hardware Specification: No. No specific hardware details (GPU/CPU models, memory amounts, or processor types) are provided for the experimental setup.
Software Dependencies: No. The paper mentions software tools such as the gensim toolkit and Liblinear, but does not specify their version numbers.
Experiment Setup: Yes. "We empirically set the dimensionality of vector to 200.", "In our case, the confidence is related to the similarity difference.", "We distinguish the usage of the target expression by calculating its average similarity (using one of the similarity metrics) to both the literal and figurative example set and assign the label of the set which has higher similarity.", and "a variant of averaged perceptron learning is applied to learn the weights for each classifier."
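The quoted split protocol (half of each fold's training data as the example set, the other half for learning representation weights) can be sketched as follows. This is a minimal illustration assuming a flat list of training samples; the function name, variable names, and fixed seed are illustrative, not from the paper.

```python
import random

def fold_split(train_samples, seed=0):
    """Within one cross-validation fold, randomly split the training
    data in half: one half serves as the labeled example set, the
    other half is held out to learn weights for the three context
    representations. Sketch only; names and seed are illustrative."""
    rng = random.Random(seed)
    shuffled = list(train_samples)
    rng.shuffle(shuffled)
    mid = len(shuffled) // 2
    # Returns (example_set, weight_learning_set)
    return shuffled[:mid], shuffled[mid:]
```

A fixed seed is used here only to make the sketch deterministic; the paper states the selection is random in each round.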
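The quoted decision rule (assign the label of the example set with the higher average similarity, with confidence tied to the similarity difference) can be sketched with cosine similarity over context vectors. The function names and the choice of plain cosine are assumptions for illustration; the paper reports 200-dimensional vectors and mentions "one of the similarity metrics" without fixing it here.

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def classify_usage(target, literal_set, figurative_set):
    """Label a target expression's context vector by comparing its
    average similarity to the literal and figurative example sets.
    Sketch of the quoted rule; cosine is an assumed metric choice."""
    lit = sum(cosine(target, v) for v in literal_set) / len(literal_set)
    fig = sum(cosine(target, v) for v in figurative_set) / len(figurative_set)
    # The paper relates classification confidence to the similarity difference.
    label = "literal" if lit >= fig else "figurative"
    return label, abs(lit - fig)
```

In practice the vectors would be 200-dimensional context representations rather than the toy 2-D vectors used in testing.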