Notice: The reproducibility variables underlying each score are classified using an automated LLM-based pipeline, validated against a manually labeled dataset. LLM-based classification introduces uncertainty and potential bias; scores should be interpreted as estimates. Full accuracy metrics and methodology are described in [1].

Analogies Explained: Towards Understanding Word Embeddings

Authors: Carl Allen, Timothy Hospedales

ICML 2019

| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Theoretical | "We derive a probabilistically grounded definition of paraphrasing that we re-interpret as word transformation, a mathematical description of 'wx is to wy'. From these concepts we prove existence of linear relationships between W2V-type embeddings that underlie the analogical phenomenon, identifying explicit error terms." |
| Researcher Affiliation | Academia | "1School of Informatics, University of Edinburgh." |
| Pseudocode | No | The paper does not contain any pseudocode or algorithm blocks. |
| Open Source Code | No | The paper does not provide links to open-source code or state that code has been released. |
| Open Datasets | Yes | Values are computed from the text8 corpus: "Mahoney, M. text8 wikipedia dump. http://mattmahoney.net/dc/textdata.html, 2011. [Online; accessed May 2019]." |
| Dataset Splits | No | The paper uses a text corpus to discuss word embeddings (W2V, GloVe) but specifies no train/validation/test splits for experiments conducted by the authors. |
| Hardware Specification | No | The paper gives no details of the hardware used for its computations or derivations. |
| Software Dependencies | No | The paper mentions W2V and GloVe as models but lists no specific software or library versions used for its derivations or illustrative examples. |
| Experiment Setup | No | The paper focuses on theoretical derivations and proofs; it describes no experimental setup, hyperparameters, or training configuration for a novel method. |
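For context, the "analogical phenomenon" the Research Type row refers to is the vector-offset test on word embeddings (e.g. "man is to king as woman is to ?"), which the paper explains via linear relationships between W2V-type embeddings. The sketch below illustrates that test with hand-made toy vectors; the embedding values and the `analogy` helper are illustrative assumptions, not anything from the paper or from trained W2V vectors.

```python
import numpy as np

# Toy 3-d "embeddings" (made-up values for illustration only;
# real W2V vectors would be learned from a corpus such as text8).
emb = {
    "king":  np.array([0.9, 0.8, 0.1]),
    "queen": np.array([0.9, 0.1, 0.8]),
    "man":   np.array([0.1, 0.9, 0.1]),
    "woman": np.array([0.1, 0.1, 0.9]),
    "apple": np.array([0.5, 0.5, 0.5]),
}

def analogy(a, b, c, emb):
    """Vector-offset test: find the word closest (by cosine similarity)
    to emb[b] - emb[a] + emb[c], excluding the three query words."""
    target = emb[b] - emb[a] + emb[c]
    best, best_sim = None, -np.inf
    for word, vec in emb.items():
        if word in (a, b, c):
            continue
        sim = vec @ target / (np.linalg.norm(vec) * np.linalg.norm(target))
        if sim > best_sim:
            best, best_sim = word, sim
    return best

print(analogy("man", "king", "woman", emb))  # -> queen
```

With these toy values the offset king - man + woman lands nearest to "queen", which is exactly the linear relationship whose existence (with explicit error terms) the paper proves for W2V-type embeddings.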