If You Want to Be Robust, Be Wary of Initialization
Authors: Sofiane ENNADIR, Johannes Lutzeyer, Michalis Vazirgiannis, El Houcine Bergou
NeurIPS 2024
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Extensive experiments, spanning diverse models and real-world datasets subjected to various adversarial attacks, validate our findings. |
| Researcher Affiliation | Academia | Sofiane Ennadir (KTH, Stockholm, Sweden); Johannes F. Lutzeyer (LIX, École Polytechnique, IP Paris, France); Michalis Vazirgiannis (KTH & École Polytechnique, Stockholm, Sweden); El Houcine Bergou (UM6P, Benguerir, Morocco) |
| Pseudocode | No | The paper describes methods through mathematical formulations and textual descriptions of processes, but it does not contain explicitly labeled pseudocode or algorithm blocks. |
| Open Source Code | Yes | The necessary code to reproduce all our experiments is available on GitHub: https://github.com/Sennadir/Initialization_effect. |
| Open Datasets | Yes | We leverage the citation networks Cora and CiteSeer [27], with additional results on other datasets provided in Appendix G. |
| Dataset Splits | Yes | To mitigate the impact of randomness during training, each experiment was repeated 10 times, using the train/validation/test splits provided with the datasets. |
| Hardware Specification | Yes | The experiments have been run on an NVIDIA A100 GPU, where training a GCN takes around 1.2 (±0.2) s. |
| Software Dependencies | No | Our implementation is built using the open-source library PyTorch Geometric (PyG) under the MIT license [12]. |
| Experiment Setup | Yes | We maintained the same hyperparameters, including a learning rate of 1e-2, 300 epochs, and a hidden feature dimension of 16. |
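
The table above pins down a concrete experimental recipe: a GCN trained with PyTorch Geometric on Cora/CiteSeer using the splits shipped with the datasets, a learning rate of 1e-2, 300 epochs, a hidden dimension of 16, and 10 repetitions per experiment. The sketch below assembles those quoted details into a minimal runnable baseline; the two-layer GCN architecture, Adam optimizer, and seed-based repetition are assumptions on our part, not code taken from the authors' repository.

```python
# Minimal sketch of the reported setup: 2-layer GCN on Cora via PyG,
# lr 1e-2, 300 epochs, hidden dim 16, repeated 10 times on the provided splits.
import torch
import torch.nn.functional as F
from torch_geometric.datasets import Planetoid
from torch_geometric.nn import GCNConv

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# Cora ships with fixed train/validation/test masks, matching the paper's
# use of the splits provided with the datasets.
dataset = Planetoid(root="data/Planetoid", name="Cora")
data = dataset[0].to(device)


class GCN(torch.nn.Module):
    # Assumed architecture: the paper specifies hidden dim 16 but not depth.
    def __init__(self, in_dim, hidden_dim, out_dim):
        super().__init__()
        self.conv1 = GCNConv(in_dim, hidden_dim)
        self.conv2 = GCNConv(hidden_dim, out_dim)

    def forward(self, x, edge_index):
        x = F.relu(self.conv1(x, edge_index))
        return self.conv2(x, edge_index)


def run(seed):
    torch.manual_seed(seed)
    # Hyperparameters quoted in the table: lr 1e-2, 300 epochs, hidden dim 16.
    model = GCN(dataset.num_features, 16, dataset.num_classes).to(device)
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-2)
    for _ in range(300):
        model.train()
        optimizer.zero_grad()
        out = model(data.x, data.edge_index)
        loss = F.cross_entropy(out[data.train_mask], data.y[data.train_mask])
        loss.backward()
        optimizer.step()
    model.eval()
    pred = model(data.x, data.edge_index).argmax(dim=-1)
    correct = (pred[data.test_mask] == data.y[data.test_mask]).sum()
    return correct.item() / int(data.test_mask.sum())


# The paper repeats each experiment 10 times to mitigate training randomness;
# varying the seed is one plausible way to do so.
accs = [run(seed) for seed in range(10)]
print(f"mean test accuracy over 10 runs: {sum(accs) / len(accs):.3f}")
```

Note that this baseline only reproduces the clean-training loop; the paper's adversarial-attack evaluations and initialization variants live in the linked repository.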