Lovász Principle for Unsupervised Graph Representation Learning

Authors: Ziheng Sun, Chris Ding, Jicong Fan

NeurIPS 2023

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | The experiments demonstrate that our Lovász principles achieve competitive performance compared to the baselines in unsupervised and semi-supervised graph-level representation learning tasks.
Researcher Affiliation | Academia | Ziheng Sun (1,2), Chris Ding (1), Jicong Fan (1,2); (1) School of Data Science, The Chinese University of Hong Kong, Shenzhen, China; (2) Shenzhen Research Institute of Big Data, Shenzhen, China; zihengsun@link.cuhk.edu.cn, {chrisding,fanjicong}@cuhk.edu.cn
Pseudocode | Yes | In Algorithm 1, we propose a constrained optimization for the 'strict Lovász principle' via projection. Algorithm 1: Constrained optimization for the 'strict Lovász principle'. Algorithm 2: The definition of the projection function Proj_U. (A hedged projected-gradient sketch in this spirit appears after the table.)
Open Source Code | Yes | The code of our Lovász principles is publicly available on GitHub: https://github.com/SunZiheng0/Lovasz-Principle
Open Datasets | Yes | We conduct the experiments on TUD benchmark datasets [Morris et al., 2020] and ChEMBL benchmark datasets [Mayr et al., 2018; Gaulton et al., 2012]. (Loading a TUD dataset is shown in the setup sketch after the table.)
Dataset Splits | Yes | We perform 10-fold cross-validation on each dataset, repeat 10 times with different random seeds, and record the average accuracy (ACC) and standard deviation. For each fold, we use 80% of the total data as unlabeled data, 10% as labeled training data, and 10% as labeled testing data. (A simplified version of this evaluation protocol is sketched after the table.)
Hardware Specification | Yes | We run the programs on a machine with an Intel i7 CPU and an RTX 3090 GPU.
Software Dependencies | No | The paper mentions using a 5-layer GIN [Xu et al., 2018], an SVM as the classifier, and ResGCN [Chen et al., 2019], but it does not specify version numbers for any of the software components or libraries, which are necessary for full reproducibility.
Experiment Setup | Yes | Specifically, we use a 5-layer GIN [Xu et al., 2018] with hidden size 128 as the representation model and an SVM as the classifier. The model is trained with a batch size of 128 and a learning rate of 0.001. For the contrastive learning methods (e.g., JOAOv2 and AutoGCL), we use 30 epochs of contrastive pre-training under the naive strategy. (See the setup sketch after the table.)
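
To make the quoted experiment setup concrete, below is a minimal sketch of loading a TUD benchmark dataset and building a 5-layer GIN encoder with hidden size 128, batch size 128, and learning rate 0.001, assuming PyTorch Geometric. The dataset name (MUTAG), the sum pooling, and the ReLU activations are illustrative assumptions, not details confirmed by the paper.

```python
# Sketch of the quoted setup: TUD benchmark data + 5-layer GIN encoder (hidden 128),
# batch size 128, learning rate 0.001. Dataset name and pooling are assumptions.
import torch
from torch import nn
from torch_geometric.datasets import TUDataset
from torch_geometric.loader import DataLoader
from torch_geometric.nn import GINConv, global_add_pool

class GINEncoder(nn.Module):
    def __init__(self, in_dim: int, hidden: int = 128, num_layers: int = 5):
        super().__init__()
        self.convs = nn.ModuleList()
        for i in range(num_layers):
            mlp = nn.Sequential(
                nn.Linear(in_dim if i == 0 else hidden, hidden),
                nn.ReLU(),
                nn.Linear(hidden, hidden),
            )
            self.convs.append(GINConv(mlp))

    def forward(self, x, edge_index, batch):
        for conv in self.convs:
            x = conv(x, edge_index).relu()
        return global_add_pool(x, batch)  # one 128-d embedding per graph

dataset = TUDataset(root="data/TU", name="MUTAG")  # any TUD benchmark dataset
loader = DataLoader(dataset, batch_size=128, shuffle=True)
model = GINEncoder(dataset.num_features, hidden=128, num_layers=5)
optimizer = torch.optim.Adam(model.parameters(), lr=0.001)
```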
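
The paper's Algorithms 1 and 2 are not reproduced in this summary, so the following is only a generic projected-gradient sketch in their spirit: a soft surrogate of the Lovász-number objective max_i 1/(c^T u_i)^2 is minimized, and after each gradient step the node embeddings and the handle vector c are projected back onto the unit sphere. The loss form, the projection function, and all names here are assumptions made for illustration; this is not the authors' Proj_U.

```python
# A minimal projected-gradient sketch in the spirit of Algorithm 1.
# Loss form and projection choice are illustrative assumptions, NOT the paper's Proj_U.
import torch

def project_unit_rows(U: torch.Tensor) -> torch.Tensor:
    """Hypothetical stand-in for Proj_U: renormalize each node embedding to the unit sphere."""
    return U / U.norm(dim=1, keepdim=True).clamp_min(1e-12)

def lovasz_style_loss(U: torch.Tensor, c: torch.Tensor) -> torch.Tensor:
    """Soft surrogate of the Lovász-number objective max_i 1/(c^T u_i)^2 (illustrative only)."""
    inner = U @ c  # inner products c^T u_i, shape (n,)
    return (1.0 / inner.pow(2).clamp_min(1e-8)).max()

n, d = 32, 16
U = torch.randn(n, d, requires_grad=True)  # node embeddings (would come from a GNN)
c = torch.randn(d, requires_grad=True)     # handle vector, used as the graph representation

opt = torch.optim.Adam([U, c], lr=1e-3)
for step in range(100):
    opt.zero_grad()
    loss = lovasz_style_loss(project_unit_rows(U), c / c.norm())
    loss.backward()
    opt.step()
    with torch.no_grad():  # projection step after each gradient update
        U.copy_(project_unit_rows(U))
        c.copy_(c / c.norm())
```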
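
Finally, the evaluation protocol quoted under Dataset Splits can be approximated as follows: 10-fold cross-validation repeated with 10 random seeds, with an SVM trained on frozen graph embeddings. This sketch follows the standard unsupervised evaluation and does not reproduce the 80/10/10 semi-supervised split; the embeddings, labels, and SVM hyperparameters are placeholders.

```python
# Simplified sketch of the quoted protocol: repeated 10-fold CV with an SVM
# on precomputed graph embeddings; hyperparameters are illustrative.
import numpy as np
from sklearn.model_selection import StratifiedKFold
from sklearn.svm import SVC

def evaluate(embeddings: np.ndarray, labels: np.ndarray, repeats: int = 10):
    accs = []
    for seed in range(repeats):
        skf = StratifiedKFold(n_splits=10, shuffle=True, random_state=seed)
        for train_idx, test_idx in skf.split(embeddings, labels):
            clf = SVC(C=1.0)  # SVM classifier on frozen embeddings
            clf.fit(embeddings[train_idx], labels[train_idx])
            accs.append(clf.score(embeddings[test_idx], labels[test_idx]))
    return float(np.mean(accs)), float(np.std(accs))  # average ACC and standard deviation
```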