How Many Pairwise Preferences Do We Need to Rank a Graph Consistently?

Authors: Aadirupa Saha, Rakesh Shivanna, Chiranjib Bhattacharyya. Pages 4830-4837.

AAAI 2019

Reproducibility Variable Result LLM Response
Research Type Experimental "We also report experimental evaluations on different synthetic and real-world datasets, where our algorithm is shown to outperform the state-of-the-art methods." Section 6 (Experiments) reports experiments on both real-world and synthetic graphs, comparing Pref-Rank with the following algorithms: (a) PR-Kron: Pref-Rank with K_LS(G ⊗ G) (see Eqn. (5)); (b) PR-PD: Pref-Rank with PD-Lab(G) using the LS-labelling, i.e. U = U_LS; (c) GR: Graph Rank (Agarwal 2010); (d) RC: Rank Centrality (Negahban, Oh, and Shah 2012); (e) IPR: Inductive Pairwise Ranking with the Laplacian as feature embedding (Niranjan and Rajkumar 2017). Figure 1 (synthetic data) and Figure 2 (real-world data) plot the average number of mispredictions (er^{ℓ0-1}_D(f), Eqn. (6)) against the fraction of sampled pairs (f).
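The quantity plotted in Figures 1 and 2 is a pairwise 0-1 misprediction error over node pairs. A minimal sketch of such a metric follows; the exact definition is Eqn. (6) of the paper, so the function name, the normalization by the number of pairs, and the representation of rankings as node orderings are all illustrative assumptions, not the authors' code.

```python
import numpy as np

def pairwise_01_error(true_order, pred_order):
    """Fraction of node pairs whose relative order is mispredicted.

    true_order, pred_order: sequences of node ids, best node first.
    This is a hypothetical stand-in for the er^{l0-1}_D(f) metric.
    """
    n = len(true_order)
    # Position of each node in the true and predicted rankings.
    pos_t = np.empty(n, dtype=int)
    pos_p = np.empty(n, dtype=int)
    pos_t[np.asarray(true_order)] = np.arange(n)
    pos_p[np.asarray(pred_order)] = np.arange(n)
    mispredicted, total = 0, 0
    for i in range(n):
        for j in range(i + 1, n):
            total += 1
            # A pair is mispredicted if the two rankings disagree on its order.
            if (pos_t[i] - pos_t[j]) * (pos_p[i] - pos_p[j]) < 0:
                mispredicted += 1
    return mispredicted / total
```

For example, with a true ranking [0, 1, 2] and a predicted ranking [0, 2, 1], only the pair (1, 2) is flipped, giving an error of 1/3.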
Researcher Affiliation Collaboration Aadirupa Saha, Department of CSA, Indian Institute of Science, Bangalore (aadirupa@iisc.ac.in); Rakesh Shivanna, Google Brain, Mountain View (rakeshshivanna@google.com); Chiranjib Bhattacharyya, Department of CSA and Robert Bosch Center for Cyberphysical Systems, Indian Institute of Science, Bangalore (chiru@iisc.ac.in).
Pseudocode Yes Algorithm 1: Pref-Rank. Input: G(V, E) and a subset of preferences (S_m, y_{S_m}). Init: pairwise graph embedding U ∈ ℝ^{d×N}, d ∈ ℕ+. Get w* by solving the SVM objective (2). Compute preference scores f = U^⊤ w*. Count the number of wins c(i) for each node i ∈ V: c(i) := Σ_{k=(i_k,j_k) | i_k=i} 1(f_k > 0) + Σ_{k=(i_k,j_k) | j_k=i} 1(f_k < 0). Return: ranking of nodes σ̂_n ∈ argsort(c).
Open Source Code No The paper mentions 'A full version of this paper is available at http://arxiv.org/abs/1811.02161' which refers to the paper itself, not the source code for the methodology.
Open Datasets Yes We use 6 standard real-world datasets for three graph learning tasks: (a) Heart and Fourclass for BR, (b) Vehicle and Vowel for OR, and (c) House and Mg for FR. Source: https://www.csie.ntu.edu.tw/~cjlin/libsvmtools/datasets/
Dataset Splits No The paper mentions the 'fraction of sampled pairs (f)' for its experiments and discusses a theoretical test-set error (er^{ℓ}_{S_m}(f)) in the context of generalization, but it does not specify explicit training/validation/test splits (e.g., percentages or sample counts) for the experimental datasets.
Hardware Specification No The paper does not provide any specific details about the hardware (e.g., CPU, GPU models, memory) used for running the experiments.
Software Dependencies No The paper does not provide specific details about software dependencies, such as names of libraries or solvers with version numbers, needed to replicate the experiments.
Experiment Setup No The paper mentions a 'regularization hyperparameter C > 0' for the SVM formulation but does not provide specific values for this or any other hyperparameters (e.g., learning rate, batch size, number of epochs) used in the experiments.