Partition and Code: learning how to compress graphs

Authors: Giorgos Bouritsas, Andreas Loukas, Nikolaos Karalias, Michael Bronstein

NeurIPS 2021

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Empirically, PnC yields significant compression improvements on diverse real-world networks. ... We evaluate our framework on diverse real-world graph distributions and showcase compression gains with respect to both conventional and advanced baseline compressors, in observed and unseen data. ... 7 Empirical Results
Researcher Affiliation | Collaboration | Giorgos Bouritsas, Imperial College London, UK (g.bouritsas@imperial.ac.uk); Andreas Loukas, EPFL, Switzerland (andreas.loukas@epfl.ch); Nikolaos Karalias, EPFL, Switzerland (nikolaos.karalias@epfl.ch); Michael M. Bronstein, Imperial College London / Twitter, UK (m.bronstein@imperial.ac.uk)
Pseudocode | No | The paper describes the algorithm in text form (e.g., "Our algorithm proceeds by iteratively sampling...") and refers to Appendix B.4 for details, but no explicit pseudocode or algorithm block is presented in the provided text.
Open Source Code | Yes | "The source code is publicly available at https://github.com/gbouritsas/PnC"
Open Datasets | Yes | We evaluate our framework in a variety of datasets: small molecules, proteins and social networks [98–102]. ... Tables 1 and 2 report the compression quality of each method measured in terms of the average number of bits required to store each edge in a dataset (bpe). ... [99] John J. Irwin, Teague Sterling, Michael M. Mysinger, Erin S. Bolstad, and Ryan G. Coleman. ZINC: a free tool to discover chemistry for biology. Journal of Chemical Information and Modeling, 52(7):1757–1768, 2012.
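The bits-per-edge (bpe) metric quoted above is a simple dataset-level average: the total compressed size divided by the total number of edges. A minimal sketch of this computation, with hypothetical function and variable names not taken from the paper:

```python
def bits_per_edge(compressed_bits, num_edges):
    """Average number of bits required to store each edge in a dataset.

    compressed_bits: compressed size in bits of each graph in the dataset
    num_edges: edge count of each corresponding graph
    """
    if len(compressed_bits) != len(num_edges):
        raise ValueError("one compressed size per graph is required")
    # Aggregate over the whole dataset, then divide: this weights
    # larger graphs proportionally to their edge counts.
    return sum(compressed_bits) / sum(num_edges)

# Example: two graphs compressed to 96 and 160 bits, with 10 and 22 edges.
print(bits_per_edge([96, 160], [10, 22]))  # 256 bits / 32 edges = 8.0 bpe
```

A lower bpe means a better compressor; comparing bpe across methods on the same dataset is what Tables 1 and 2 report.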
Dataset Splits | No | The paper mentions the datasets used for evaluation but does not specify training, validation, or test splits (e.g., percentages, counts, or split methodology).
Hardware Specification | No | The paper does not provide specific details about the hardware (e.g., GPU/CPU models, memory specifications) used for running the experiments.
Software Dependencies | No | The paper does not list specific software dependencies with version numbers (e.g., Python, PyTorch, or CUDA versions) required for reproducibility.
Experiment Setup | No | The paper describes the model parametrization and objective function but does not provide specifics such as learning rates, batch sizes, number of training epochs, or optimizer configuration in the main text.