Learning Label Initialization for Time-Dependent Harmonic Extension.
Authors: Amitoz Azad
IJCAI 2022
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | This section walks through the research questions (RQs) and discusses the experiments done to answer them. We benchmark the improved solution against several state-of-the-art methods for node classification. Our approach yields competitive results (Table 2, Row 08). |
| Researcher Affiliation | Academia | Amitoz Azad, University of Caen Normandy, amitoz.sudo@gmail.com |
| Pseudocode | No | The paper does not contain any structured pseudocode or algorithm blocks. |
| Open Source Code | Yes | Code. The scripts used for the experiments and the instructions to run are available at Github: https://github.com/aGIToz/Learning-Label-Initialization. |
| Open Datasets | Yes | Handcrafted graphs. We created kNN graphs for two datasets: MNIST [LeCun et al., 1998] and F(ashion)MNIST [Xiao et al., 2017]. Non-handcrafted graphs. We used the popular citation graphs, which have been widely used for benchmarking graph nets: Cora, Citeseer, and Pubmed [Sen et al., 2008]. Datasets. Apart from the citation graphs, we include three more real-world datasets: Amazon Photo [Shchur et al., 2018], County FB [Jia and Benson, 2021], and FMAsia [Rozemberczki and Sarkar, 2020]. (A kNN-graph construction sketch follows the table.) |
| Dataset Splits | Yes | For the MNIST and FMNIST datasets, we kept the seed size (labeled nodes) at 20 nodes per class, 500 nodes per class for validation, and the rest for the test set. The citation graphs come with a prebuilt split (a.k.a. the planetoid split), which is very often used for benchmarking node classification with GNNs. Photo, County FB, and FMAsia do not come with default training, validation, and test sets; for these we keep the split at 40% (train), 40% (validation), 20% (test) per class and evaluate the performance over 10 different splits. (A per-class split sketch follows the table.) |
| Hardware Specification | Yes | The search was done using NVIDIA 1080Ti and P100 GPUs. The final evaluation was done on 1080Ti. |
| Software Dependencies | No | The paper mentions using the 'PyTorch framework', 'torchgeometric', and 'torchdiffeq' but does not specify their version numbers, which are required for full reproducibility. |
| Experiment Setup | Yes | Training setting. We used the Adam or RMSprop optimizer in all our experiments. The optimization was done with full-batch gradient descent. L2 weight regularization and dropout were also used in all optimizations. Most of the MLPs used in the architecture (Figure 1) have one or two hidden channels with ReLU activations. The loss function was cross entropy. The hyperparameters of Solver1 and Solver2 were kept the same. The Dormand-Prince (dopri5) numerical scheme was used for all the experiments related to Eq. (15). The validation splits were used for tuning the hyperparameters and for the final model selection (saving the front and the weights). Hyperparameter optimization was done using a large random search (coarse to fine). We used either a categorical or a uniform distribution over the hyperparameters while running the random search. (A dopri5 integration sketch follows the table.) |
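The handcrafted MNIST/FMNIST graphs referenced in the Open Datasets row are k-nearest-neighbour graphs. As a rough illustration only, the sketch below builds such a graph with torch_geometric's `knn_graph`; the value of k, the Euclidean metric, and the feature preprocessing are assumptions, not choices reported in the paper.

```python
# Hedged sketch: a kNN graph over flattened MNIST images.
# k=10 and plain Euclidean distance are guesses, not the paper's settings.
import torch
from torchvision import datasets, transforms
from torch_geometric.nn import knn_graph

mnist = datasets.MNIST(root="data", train=True, download=True,
                       transform=transforms.ToTensor())
x = mnist.data.float().view(len(mnist), -1) / 255.0  # 60000 x 784 features
y = mnist.targets                                     # node labels

# Connect each image to its 10 nearest neighbours (requires torch-cluster).
edge_index = knn_graph(x, k=10, loop=False)
print(edge_index.shape)  # [2, num_edges]
```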
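For Photo, County FB, and FMAsia, the splits described in the Dataset Splits row are drawn per class. A minimal sketch of such a 40/40/20 per-class split, repeated over ten seeds, might look as follows; the helper name `per_class_split` and the placeholder labels are illustrative and not taken from the authors' repository.

```python
# Hedged sketch: random 40/40/20 train/valid/test split drawn per class.
import torch

def per_class_split(y, train_frac=0.4, valid_frac=0.4, seed=0):
    g = torch.Generator().manual_seed(seed)
    train_idx, valid_idx, test_idx = [], [], []
    for c in y.unique():
        idx = (y == c).nonzero(as_tuple=True)[0]
        idx = idx[torch.randperm(len(idx), generator=g)]
        n_train = int(train_frac * len(idx))
        n_valid = int(valid_frac * len(idx))
        train_idx.append(idx[:n_train])
        valid_idx.append(idx[n_train:n_train + n_valid])
        test_idx.append(idx[n_train + n_valid:])
    return torch.cat(train_idx), torch.cat(valid_idx), torch.cat(test_idx)

y = torch.randint(0, 8, (1000,))  # placeholder node labels
# Ten different splits, matching the evaluation protocol described above.
splits = [per_class_split(y, seed=s) for s in range(10)]
```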
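The Experiment Setup row mentions integrating Eq. (15) with the adaptive Dormand-Prince scheme via torchdiffeq. The fragment below is only a schematic of how such a call typically looks; the dynamics module, state sizes, and tolerances are placeholders, not the paper's operator or hyperparameters.

```python
# Hedged sketch: integrating an ODE with torchdiffeq's "dopri5" solver.
import torch
from torchdiffeq import odeint

class Dynamics(torch.nn.Module):
    """Stand-in right-hand side dx/dt = f(t, x); the paper's dynamics
    would involve the graph structure rather than a plain linear map."""
    def __init__(self, dim):
        super().__init__()
        self.lin = torch.nn.Linear(dim, dim, bias=False)

    def forward(self, t, x):
        return self.lin(x)

x0 = torch.randn(32, 16)               # initial node states (placeholder)
t = torch.linspace(0.0, 1.0, steps=2)  # integrate from t=0 to t=1
out = odeint(Dynamics(16), x0, t, method="dopri5", rtol=1e-4, atol=1e-6)
x_T = out[-1]                          # states at the final time
```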