Optimal Block-wise Asymmetric Graph Construction for Graph-based Semi-supervised Learning

Authors: Zixing Song, Yifei Zhang, Irwin King

NeurIPS 2023

Reproducibility assessment — for each variable, the result and the supporting LLM response:
Research Type: Experimental — "Finally, we perform extensive experiments on synthetic and real-world datasets to demonstrate its superiority to the state-of-the-art graph construction methods in GSSL."
Researcher Affiliation: Academia
- Zixing Song, The Chinese University of Hong Kong, New Territories, Hong Kong SAR (zxsong@cse.cuhk.edu.hk)
- Yifei Zhang, The Chinese University of Hong Kong, New Territories, Hong Kong SAR (yfzhang@cse.cuhk.edu.hk)
- Irwin King, The Chinese University of Hong Kong, New Territories, Hong Kong SAR (king@cse.cuhk.edu.hk)
Pseudocode: Yes
Procedure GraphWeightBlockInference(z, T, α, β, t, ϵ)
Input: distance vector z, linear mapping matrix T, balancing parameters α and β, step size t, error tolerance parameter ϵ.
Output: block graph weight vector ŵ.
1: initialize λ^0 = λ^{−1} and set k = 1
2: do
3:     µ^k = λ^{k−1} + ((k − 2)/(k + 1)) (λ^{k−1} − λ^{k−2})
4:     w^k = [ (T^⊤µ^k − z) / (2β) ]_+
5:     u^k = ½ (Tw^k − t^{−1}µ^k) + ½ √( (Tw^k − t^{−1}µ^k) ⊙ (Tw^k − t^{−1}µ^k) + 4αt^{−1}·1 )
6:     λ^k = µ^k − t (Tw^k − u^k)
7:     k ← k + 1
8: while ‖λ^k − λ^{k−1}‖ > ϵ
9: return ŵ = w^k
Open Source Code: No — no explicit statement or link providing access to the source code for the described methodology was found.
Open Datasets: Yes — "We perform extensive experiments on synthetic and real-world datasets to demonstrate its superiority..."
Table 1: Description of datasets
Dataset           #Samples n   #Features d   #Classes c
ORHD                   5,620            64           10
USPS                   9,298           256           10
COIL100                7,200         1,024          100
TDT2                   9,394        36,771           30
MNIST                 70,000           784           10
EMNIST Letters       145,600           784           20
Dataset Splits: No — the paper mentions that the "label rate is ten labeled samples per class" and refers to "low label rates", but it gives no explicit counts or percentages for training/validation/test splits and mentions no dedicated validation set.
Hardware Specification: No — no specific hardware details (e.g., CPU or GPU models, memory) used to run the experiments are provided in the paper.
Software Dependencies: No — no software dependencies with version numbers (programming languages, libraries, or frameworks) are mentioned.
Experiment Setup: No — the paper states that "All the hyper-parameters are fine-tuned with the grid search method" and specifies LGC as the default label inference algorithm with a label rate of ten labeled samples per class, but it does not report the concrete hyperparameter values (e.g., α, β, learning rates) used for BAGL in the experiments.
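The pseudocode quoted above outlines an accelerated dual-ascent loop. The NumPy sketch below is a best-effort reading of that extracted pseudocode, not the authors' code: the closed forms assumed for the w and u updates are reconstructions, and the function name and defaults are ours.

```python
import numpy as np

def graph_weight_block_inference(z, T, alpha, beta, t, eps=1e-6, max_iter=100):
    """Best-effort sketch of the extracted pseudocode (not the authors' code).

    Assumed reading: accelerated dual ascent with a closed-form primal
    update for the edge weights w and a log-barrier prox for the
    auxiliary degree-like vector u.
    """
    n, m = T.shape                    # T maps edge weights to node degrees
    lam_prev = np.zeros(n)            # lambda^{-1}
    lam = np.zeros(n)                 # lambda^{0}
    w = np.zeros(m)
    for k in range(1, max_iter + 1):
        # Momentum extrapolation of the dual variable (line 3).
        mu = lam + ((k - 2) / (k + 1)) * (lam - lam_prev)
        # Assumed primal update (line 4): w = [(T'mu - z) / (2*beta)]_+.
        w = np.maximum(0.0, (T.T @ mu - z) / (2.0 * beta))
        # Assumed auxiliary update (line 5): positive root of
        # t*u^2 + (mu - t*T@w)*u - alpha = 0, elementwise.
        v = T @ w - mu / t
        u = 0.5 * (v + np.sqrt(v * v + 4.0 * alpha / t))
        # Dual gradient step (line 6), then the stopping test (line 8).
        lam_prev, lam = lam, mu - t * (T @ w - u)
        if np.linalg.norm(lam - lam_prev) <= eps:
            break
    return w
```

The step size t and the balancing parameters here are placeholders; convergence of the sketch for a given T is not guaranteed without tuning t.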
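The "ten labeled samples per class" label rate noted under Dataset Splits is straightforward to reproduce in spirit; the helper below is an illustrative sketch (the function name, seed, and API are ours, not from the paper).

```python
import numpy as np

def split_per_class(y, n_labeled_per_class=10, seed=0):
    """Pick n_labeled_per_class labeled samples per class; all remaining
    samples are treated as unlabeled, mirroring the stated label rate."""
    rng = np.random.default_rng(seed)
    labeled = []
    for c in np.unique(y):
        idx = np.flatnonzero(y == c)                  # indices of class c
        labeled.extend(rng.choice(idx, size=n_labeled_per_class,
                                  replace=False))
    labeled = np.asarray(sorted(labeled))
    unlabeled = np.setdiff1d(np.arange(len(y)), labeled)
    return labeled, unlabeled
```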
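The grid search cited under Experiment Setup can be sketched generically; since the paper does not report the searched ranges for α and β, the grids below are placeholders, and `evaluate` stands in for whatever validation metric (e.g., LGC accuracy) the tuner maximizes.

```python
import itertools

def grid_search(evaluate, alpha_grid, beta_grid):
    """Exhaustive grid search: return the (alpha, beta) pair that
    maximizes a caller-supplied evaluate(alpha, beta) score."""
    return max(itertools.product(alpha_grid, beta_grid),
               key=lambda ab: evaluate(*ab))

# Placeholder grids; the actual searched values are not reported.
alpha_grid = [0.01, 0.1, 1.0, 10.0]
beta_grid = [0.01, 0.1, 1.0, 10.0]
```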