Graph Resistance and Learning from Pairwise Comparisons

Authors: Julien Hendrickx, Alexander Olshevsky, Venkatesh Saligrama

ICML 2019 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | 3. Experiments. The purpose of this section is two-fold. First, we would like to demonstrate that simulations are consistent with Theorem 1; in particular, we would like to see error scalings that are consistent with the average resistance rather than, e.g., the spectral gap. Second, we wish to observe that, although our results are asymptotic, in practice the scaling with resistance appears immediately, even for small k. Since our main contribution is theoretical, and since we do not claim that our algorithm is better than available methods in practice, we do not perform a comparison to other methods in the literature. Additional details about our experiments are provided in Section ?? in the Supplementary Information. We begin with Erdős–Rényi comparison graphs. Figure 1 shows the evolution of the error with the number k of comparisons per edge. The error decreases as O(1/√k), as predicted. Moreover, this is already the case for small values of k. Next we move to the influence of the graph properties. Figure 2 shows that the average error is asymptotically constant when n grows while keeping the expected degree d := (n−1)p constant, and that it decreases as O(1/√d) when the expected degree grows while keeping n constant. This is consistent with our analysis in Table 1, and with the results of (Boumal & Cheng, 2014) showing that the average resistance Ω_avg of Erdős–Rényi graphs evolves as O(1/d). We next consider lattice graphs in Figure 3. For the 3D lattice, the error appears to converge to a constant as n grows, which is consistent with our results, since the average resistance of the 3D lattice is bounded independently of n. The trend for the 2D lattice also appears consistent with the O(√log n) bound predicted by our results, since the resistance of the 2D lattice evolves as O(log n).
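The quoted scaling claim is easy to check in simulation. The sketch below is not the authors' code; the graph size, the weight range, and the use of a plain least-squares fit on empirical log-ratios are assumptions consistent with the description above. It draws k Bradley–Terry comparisons per edge of an Erdős–Rényi graph and reports the estimation error, which should decay roughly as O(1/√k).

```python
import numpy as np

rng = np.random.default_rng(0)

def er_edges(n, p):
    """Edge list of an Erdos-Renyi graph G(n, p)."""
    return [(i, j) for i in range(n) for j in range(i + 1, n)
            if rng.random() < p]

def estimate_log_weights(n, edges, k, log_w):
    """Estimate log-weights from k Bradley-Terry comparisons per edge.

    Each comparison on edge (i, j) is won by i with probability
    w_i / (w_i + w_j).  The estimate is the least-squares solution of
    B x ~= log(R_hat), where B is the edge-vertex incidence matrix and
    R_hat the empirical win ratio per edge (clipped so the log is finite).
    """
    m = len(edges)
    B = np.zeros((m, n))
    log_r = np.zeros(m)
    for e, (i, j) in enumerate(edges):
        p_ij = 1.0 / (1.0 + np.exp(log_w[j] - log_w[i]))  # BTL win probability
        wins = int(rng.binomial(k, p_ij))
        wins = min(max(wins, 1), k - 1)                    # avoid log(0)
        B[e, i], B[e, j] = 1.0, -1.0
        log_r[e] = np.log(wins / (k - wins))
    x, *_ = np.linalg.lstsq(B, log_r, rcond=None)
    return x - x.mean()       # log-weights are identifiable only up to a shift

n, p = 100, 0.1               # assumed graph size and edge probability
log_w = rng.uniform(-1.0, 1.0, n)
log_w -= log_w.mean()
edges = er_edges(n, p)
for k in (10, 40, 160, 640):
    err = np.linalg.norm(estimate_log_weights(n, edges, k, log_w) - log_w)
    print(f"k = {k:4d}   error = {err:.3f}   error * sqrt(k) = {err * np.sqrt(k):.2f}")
```

If the predicted O(1/√k) scaling holds, the error × √k column should stay roughly flat as k grows.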
Researcher Affiliation | Academia | Department of Mathematical Engineering, ICTEAM, UCLouvain, Belgium; Department of Electrical and Computer Engineering, Boston University, USA
Pseudocode | No | The paper describes the algorithm using mathematical equations such as log Ŵ = L†B log R̂ (Eq. 5) and explains how this system can be solved. However, it does not present the method as a formal pseudocode block or as algorithm steps with the usual pseudocode keywords.
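As a concrete illustration of the kind of equation the paper uses, here is a minimal sketch of solving a system of the form log Ŵ = L†B log R̂, assuming B is the n×m vertex-edge incidence matrix, L = BBᵀ is the graph Laplacian, and R̂ collects the empirical win ratios per edge; these symbol conventions are inferred from context, not quoted from the paper.

```python
import numpy as np

def solve_log_weights(B, log_R):
    """Compute log W_hat = pinv(L) @ B @ log_R with L = B @ B.T.

    B     : (n, m) vertex-edge incidence matrix (+1 / -1 per edge column)
    log_R : (m,) empirical log win ratios, one entry per edge
    Returns the centered log-weight estimate (weights are only
    identifiable up to a common scaling)."""
    L = B @ B.T                       # graph Laplacian
    x = np.linalg.pinv(L) @ B @ log_R
    return x - x.mean()

# Tiny usage example on the path graph 0 - 1 - 2 (numbers are made up):
B = np.array([[ 1.0,  0.0],
              [-1.0,  1.0],
              [ 0.0, -1.0]])          # columns = edges (0,1) and (1,2)
log_R = np.array([0.3, -0.1])         # observed log(wins / losses) per edge
print(solve_log_weights(B, log_R))
```

This pseudoinverse solve is equivalent to the least-squares fit on the edge equations x_i − x_j ≈ log R̂_ij, which is why no explicit inversion of a singular Laplacian is needed.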
Open Source Code | No | The paper does not provide any explicit statement about releasing source code or a link to a code repository.
Open Datasets | No | The paper mentions generating Erdős–Rényi comparison graphs and lattice graphs for its simulations, but it does not use any publicly available dataset or provide concrete access information (links, DOIs, or citations to established public datasets). All data is generated synthetically for the simulations.
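Since the comparison graphs are synthetic, they can be regenerated directly. The sketch below (assuming networkx, which the paper does not mention) builds Erdős–Rényi graphs of fixed size and increasing expected degree and computes their average effective resistance, the quantity the quoted passage above says scales as O(1/d).

```python
import numpy as np
import networkx as nx   # assumed dependency; the paper names no software

def average_resistance(G):
    """Average effective resistance over all node pairs of a connected
    graph, via the pseudoinverse of the graph Laplacian:
    Omega_ij = Lp_ii + Lp_jj - 2 * Lp_ij."""
    L = nx.laplacian_matrix(G).toarray().astype(float)
    Lp = np.linalg.pinv(L)
    d = np.diag(Lp)
    omega = d[:, None] + d[None, :] - 2.0 * Lp
    n = G.number_of_nodes()
    return omega.sum() / (n * (n - 1))   # mean over unordered pairs

n = 200
for deg in (10, 20, 40, 80):             # expected degree d = (n - 1) p
    G = nx.erdos_renyi_graph(n, deg / (n - 1), seed=1)
    if not nx.is_connected(G):           # resistance needs a connected graph
        continue
    avg = average_resistance(G)
    print(f"d = {deg:3d}   avg resistance = {avg:.4f}   d * avg = {deg * avg:.3f}")
```

If the O(1/d) behavior holds, the d × average-resistance column should stay roughly constant as the expected degree grows.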
Dataset Splits | No | The paper describes simulations and the evolution of the error, but it does not specify any training, validation, or test dataset splits. The data appears to be generated afresh for each simulation run rather than drawn from fixed splits of a pre-existing dataset.
Hardware Specification | No | The paper does not provide any specific hardware details (e.g., CPU or GPU model, memory) used to run the experiments or simulations.
Software Dependencies | No | The paper does not specify any software dependencies with version numbers.
Experiment Setup | No | The paper gives the simulation parameters for the graphs (e.g., k = 100 comparisons per edge, b = 5, N_test = 100 tests), but these are parameters of the simulation itself rather than hyperparameters or system-level training settings of a machine-learning model (e.g., learning rate, batch size, optimizer). While this provides some context for reproducibility, the paper lacks the experimental-setup details typical of ML work.
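For reference, the three quoted simulation parameters could be collected in a small config sketch; the dictionary keys and this packaging are illustrative, not taken from the paper itself.

```python
# Simulation parameters quoted from the paper; names are illustrative.
sim_config = {
    "k": 100,       # comparisons per edge
    "b": 5,         # parameter b = 5 as quoted (see the paper for its role)
    "N_test": 100,  # number of test repetitions
}
```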