Latent Constraints on Unsupervised Text-Graph Alignment with Information Asymmetry
Authors: Jidong Tian, Wenqing Chen, Yitian Li, Caoyun Fan, Hao He, Yaohui Jin
AAAI 2023
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Experiments on three UTGA tasks demonstrate the effectiveness of Constrained BT on the information-asymmetric challenge. |
| Researcher Affiliation | Academia | 1 MoE Key Lab of Artificial Intelligence, AI Institute, Shanghai Jiao Tong University 2 State Key Lab of Advanced Optical Communication System and Network, School of Electronic Information and Electrical Engineering, Shanghai Jiao Tong University 3 School of Software Engineering, Sun Yat-sen University frank92@sjtu.edu.cn, chenwq95@mail.sysu.edu.cn, {yitian li, fcy3649, hehao, jinyh}@sjtu.edu.cn |
| Pseudocode | No | The paper does not contain explicitly labeled pseudocode or algorithm blocks. The methodology is described using text and mathematical equations. |
| Open Source Code | No | The paper does not explicitly state that the source code for the described methodology is publicly available, nor does it provide a link to a code repository. |
| Open Datasets | Yes | Dataset: We experiment on three available datasets: Logic2Text (Chen et al. 2020), LogicNLI (Tian et al. 2021), and CLUTRR (Sinha et al. 2019). |
| Dataset Splits | No | The paper mentions a 'Training Setting' but does not provide the specific training/validation/test split details (e.g., percentages or exact counts) needed for reproduction. |
| Hardware Specification | No | The paper does not provide specific details about the hardware used to run the experiments, such as GPU models, CPU types, or memory specifications. |
| Software Dependencies | No | The paper names the models and frameworks used (e.g., RoBERTa, GraphRNN, GPT-2, GAT) but does not give version numbers for these dependencies or for the underlying languages and libraries (e.g., Python, PyTorch). |
| Experiment Setup | No | The paper describes the 'Iterative Training' strategy and mentions 'Multi-Task Training' with 'hyperparameters of λ1, λ2, λ3, and λ4', but it does not provide the specific values of these hyperparameters, nor detailed training configurations such as learning rate, batch size, or number of epochs. |