Data Imputation with Iterative Graph Reconstruction

Authors: Jiajun Zhong, Ning Gui, Weiwei Ye

AAAI 2023

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Experiment results on eight benchmark datasets show that IGRM yields 39.13% lower mean absolute error compared with nine baselines and 9.04% lower than the second-best.
Researcher Affiliation | Academia | School of Computer Science and Engineering, Central South University, China. 214711110@csu.edu.cn, ninggui@csu.edu.cn, 634948676@qq.com
Pseudocode | Yes | Algorithm 1: General Framework for IGRM
Open Source Code | Yes | Our code is available at https://github.com/G-AILab/IGRM.
Open Datasets | Yes | We evaluate IGRM on eight real-world datasets from the UCI Machine Learning repository (Asuncion and Newman 2007) and Kaggle.
Dataset Splits | No | The paper states, 'We treat the observed values as train set and missing values as the test set,' but does not explicitly define a separate validation set or its split percentage. A minimal sketch of this mask-based split appears below the table.
Hardware Specification | No | The paper does not provide any specific details about the hardware (e.g., GPU or CPU models, memory) used to run the experiments.
Software Dependencies | No | The paper mentions specific optimizers and activation functions (e.g., 'Adam optimizer', 'ReLU activation function') but does not list any specific software dependencies or libraries with their version numbers.
Experiment Setup | Yes | IGRM employs three variant GraphSAGE layers with 64 hidden units for bipartite GRL and one GraphSAGE layer for friend network GRL. The Adam optimizer with a learning rate of 0.001 and the ReLU activation function are used. In the process of initializing F, we randomly connect samples with |U| edges (|U| being the sample size) to build the initial friend network. This friend-network structure is reconstructed every 100 epochs during the bipartite graph training. For all experiments, we train IGRM for 20,000 epochs, run five trials with different seeds, and report the mean of the mean absolute error (MAE).
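
A minimal sketch of the mask-based split quoted in the 'Dataset Splits' row, assuming a NumPy feature matrix. The array names and the 30% missing rate are illustrative assumptions, not values taken from the paper.

import numpy as np

rng = np.random.default_rng(seed=0)
X = rng.random((100, 8))            # stand-in for a UCI feature table

MISSING_RATE = 0.3                  # illustrative assumption
observed_mask = rng.random(X.shape) > MISSING_RATE

train_values = X[observed_mask]     # observed entries -> train set
test_values = X[~observed_mask]     # masked entries -> held-out test set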
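
And a minimal, hedged sketch of the configuration in the 'Experiment Setup' row, assuming PyTorch and PyTorch Geometric. The wiring is simplified: the paper's variant GraphSAGE layers are replaced with stock SAGEConv, the bipartite and friend graphs are stand-in random edge sets, and the loss is a placeholder; only the quoted hyperparameters (three 64-unit layers plus one friend-network layer, ReLU, Adam at a 0.001 learning rate, a friend-network rebuild every 100 epochs, 20,000 epochs) come from the row above.

import torch
import torch.nn.functional as F
from torch_geometric.nn import SAGEConv

class IGRMSketch(torch.nn.Module):
    def __init__(self, in_dim, hidden=64):
        super().__init__()
        # three GraphSAGE layers with 64 hidden units for bipartite GRL
        self.bipartite = torch.nn.ModuleList(
            [SAGEConv(in_dim if i == 0 else hidden, hidden) for i in range(3)]
        )
        # one GraphSAGE layer for friend-network GRL
        self.friend = SAGEConv(hidden, hidden)

    def forward(self, x, bi_edges, friend_edges):
        for conv in self.bipartite:
            x = F.relu(conv(x, bi_edges))        # ReLU activation
        return self.friend(x, friend_edges)

# Toy stand-ins: the real model links sample nodes to feature nodes in a
# bipartite graph; random edges are used here purely for illustration.
num_samples, in_dim = 32, 16
x = torch.randn(num_samples, in_dim)
bi_edges = torch.randint(0, num_samples, (2, 64))
friend_edges = torch.randint(0, num_samples, (2, num_samples))  # |U| edges

model = IGRMSketch(in_dim)
optimizer = torch.optim.Adam(model.parameters(), lr=0.001)  # Adam, lr 0.001

for epoch in range(20000):                       # 20,000 training epochs
    if epoch % 100 == 0:
        # friend network reconstructed every 100 epochs; resampling random
        # edges here is a placeholder for the paper's actual rebuild step
        friend_edges = torch.randint(0, num_samples, (2, num_samples))
    optimizer.zero_grad()
    out = model(x, bi_edges, friend_edges)
    loss = F.mse_loss(out, torch.zeros_like(out))  # placeholder objective
    loss.backward()
    optimizer.step()

Five trials with different seeds would wrap this loop, with the mean MAE over the masked entries reported at the end.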