On Graph Reconstruction via Empirical Risk Minimization: Fast Learning Rates and Scalability
Authors: Guillaume Papa, Aurélien Bellet, Stephan Clémençon
NeurIPS 2016 | Conference PDF | Archive PDF | Plain Text | LLM Run Details
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Finally, we illustrate our theoretical results by numerical experiments on synthetic and real graphs. |
| Researcher Affiliation | Academia | Guillaume Papa, Stéphan Clémençon (LTCI, CNRS, Télécom ParisTech, Université Paris-Saclay, 75013 Paris, France; first.last@telecom-paristech.fr); Aurélien Bellet (INRIA, 59650 Villeneuve d'Ascq, France; aurelien.bellet@inria.fr) |
| Pseudocode | No | The paper does not contain any structured pseudocode or clearly labeled algorithm blocks. |
| Open Source Code | No | The paper does not provide any statement or link indicating the availability of open-source code for the methodology described. |
| Open Datasets | No | The paper describes generating its own synthetic graph data ('We create a synthetic graph with n nodes as follows.') and mentions 'experiments on a real network' in the Supplementary Material section, but it does not provide concrete access information (e.g., a specific link, DOI, or a formal citation with authors and year for a publicly available dataset) for either the synthetic or real data. |
| Dataset Splits | No | The paper describes generating a 'training graph' and a 'test graph' separately but does not specify explicit percentages or sample counts for training/validation/test splits from a single dataset, nor does it mention cross-validation. The phrase 'dataset splitting strategy given by (6)' refers to an alternative empirical risk estimation method, not necessarily to the data partitioning used in their experiments. |
| Hardware Specification | No | The paper does not provide any specific details about the hardware used for running its experiments (e.g., GPU/CPU models, memory, or cloud resources). |
| Software Dependencies | No | The paper does not specify any software dependencies with version numbers (e.g., libraries, frameworks, or solvers). |
| Experiment Setup | Yes | We create a synthetic graph with n nodes as follows. Each node i has features X_i^true ∈ R^q sampled uniformly over [0, 1]...Using this procedure, we generate a training graph with n = 1,000,000 and q = 100. We set the threshold τ such that there is an edge between about 20% of the node pairs, and set p = 0.05...Table 1 shows the test error (averaged over 10 runs)... |
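
To make the quoted experiment setup concrete, the following is a minimal Python sketch of a synthetic graph generated along these lines, with n reduced from 1,000,000 to 1,000 so it runs quickly. The edge rule (thresholding the pairwise Euclidean distance between true feature vectors) and the role of p (independent edge-flip noise on the observed graph) are assumptions made for illustration; the quoted setup does not spell either of them out.

```python
import numpy as np

# Hypothetical sketch of the synthetic-graph generation quoted above.
# Assumptions (not stated in the quote): edges connect pairs whose Euclidean
# feature distance falls below the threshold tau, and p is the probability of
# independently flipping each observed edge indicator (noise).

rng = np.random.default_rng(0)

n, q = 1_000, 100          # the paper uses n = 1,000,000; reduced here for a quick demo
p = 0.05                   # noise level (role assumed, see comment above)

# Each node i gets true features sampled uniformly over [0, 1]^q.
X_true = rng.uniform(0.0, 1.0, size=(n, q))

# Pairwise Euclidean distances via the Gram matrix (memory-friendly for large n).
sq_norms = (X_true ** 2).sum(axis=1)
d2 = sq_norms[:, None] + sq_norms[None, :] - 2.0 * X_true @ X_true.T
dists = np.sqrt(np.maximum(d2, 0.0))

# Choose tau so that roughly 20% of node pairs are connected, as in the paper.
iu = np.triu_indices(n, k=1)
tau = np.quantile(dists[iu], 0.20)

# True adjacency: edge iff the feature distance is below tau (no self-loops).
A_true = (dists < tau) & ~np.eye(n, dtype=bool)

# Observed adjacency: flip each upper-triangular entry with probability p,
# then mirror it to keep the graph symmetric.
flips = rng.random(len(iu[0])) < p
A_obs = A_true.copy()
A_obs[iu] ^= flips
A_obs.T[iu] = A_obs[iu]

print(f"tau = {tau:.3f}, observed edge density = {A_obs[iu].mean():.3f}")
```

With a true density of 0.20 and p = 0.05, the observed density lands near 0.23, since roughly 5% of non-edges are flipped on and 5% of edges are flipped off. This is only a plausible reading of the quoted setup, not the authors' exact procedure.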