Graph Component Contrastive Learning for Concept Relatedness Estimation
Authors: Yueen Ma, Zixing Song, Xuming Hu, Jingjing Li, Yifei Zhang, Irwin King
AAAI 2023
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Empirical results on three datasets show significant improvement over the state-of-the-art model. Detailed ablation studies demonstrate that our proposed approach can effectively capture the high-order relationship among concepts. We conduct comprehensive experiments with three different Transformer models on three datasets. |
| Researcher Affiliation | Academia | 1The Chinese University of Hong Kong, 2Tsinghua University |
| Pseudocode | No | The paper describes the proposed methods using textual explanations and mathematical equations, but does not include structured pseudocode or algorithm blocks. |
| Open Source Code | Yes | Our code is available on GitHub: https://github.com/Panmani/GCCL |
| Open Datasets | Yes | We use the official dataset split for WORD whose train-test ratio is approximately 2:1. Since CNSE and CNSS do not provide an official dataset split, they are split randomly with a train-dev-test ratio of 7:2:1. |
| Dataset Splits | Yes | Since CNSE and CNSS do not provide an official dataset split, they are split randomly with a train-dev-test ratio of 7:2:1. |
| Hardware Specification | Yes | Experiments are conducted on four Nvidia TITAN V GPUs. |
| Software Dependencies | No | The paper mentions software components like the 'AdamW optimizer' and specific 'Transformer models (BERT, RoBERTa, XLNet)', but does not provide version numbers for any libraries or programming languages. |
| Experiment Setup | Yes | We use the AdamW optimizer (Loshchilov and Hutter 2019) with learning rate = 1e-5 and ϵ = 1e-8, following a linear schedule. The Transformer models are trained for 5 epochs. For GC-NCE, we use α = 10. For MoCo (He et al. 2020; Chen et al. 2020b), we use queue size Q = 32, momentum coefficient m = 1 - 1e-4, and temperature τ = 0.1. We use β = 0.1 for the overall loss. |
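The reported optimizer settings (AdamW, lr = 1e-5, ϵ = 1e-8, linear schedule over 5 epochs) can be sketched in PyTorch as follows. This is a minimal illustration, not the authors' code: the placeholder model and `steps_per_epoch` value are assumptions, since the actual values depend on the Transformer backbone and dataset size.

```python
import torch

# Placeholder model standing in for the Transformer encoder (assumption).
model = torch.nn.Linear(768, 2)

# AdamW with the hyperparameters reported in the paper.
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-5, eps=1e-8)

num_epochs = 5          # from the paper: Transformers trained for 5 epochs
steps_per_epoch = 100   # assumption: depends on dataset size and batch size
total_steps = num_epochs * steps_per_epoch

# Linear schedule: learning rate decays linearly from 1e-5 to 0.
scheduler = torch.optim.lr_scheduler.LambdaLR(
    optimizer, lr_lambda=lambda step: max(0.0, 1.0 - step / total_steps)
)
```

Each training step would call `optimizer.step()` followed by `scheduler.step()`, so the learning rate reaches zero exactly at the end of the 5 epochs.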