Robust Graph Matching when Nodes are Corrupt
Authors: Taha Ameen, Bruce Hajek
ICML 2024
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Our simulations suggest that there is a gap between these fundamental limits and the performance of commonly used computationally feasible algorithms; Section 5 provides details. Figure 1 compares the asymptotic guarantee of the k-core estimator against simulation results for the following estimators. (A k-core illustration appears after the table.) |
| Researcher Affiliation | Academia | 1Department of Electrical and Computer Engineering, and the Coordinated Science Laboratory, University of Illinois Urbana-Champaign, Urbana, IL 61801, USA. |
| Pseudocode | Yes | Algorithm 1 Adversary A |
| Open Source Code | No | The paper mentions code availability for other works it compares against (e.g., GRAMPA, DEGREE PROFILING) but does not provide source code for its own described methodology (the analysis of k-core or maximum overlap estimators in the context of node corruption). |
| Open Datasets | No | The paper focuses on theoretical models, specifically "correlated Erdős-Rényi graphs," and mentions applications to real-world networks like PPI networks. However, it does not use a specific publicly available or open dataset for its experiments or analysis. (A generation sketch for the correlated model appears after the table.) |
| Dataset Splits | No | The paper's analysis is primarily theoretical and involves simulations of graph models (Erdős-Rényi graphs) rather than experiments on specific datasets with defined training, validation, and test splits. |
| Hardware Specification | No | The paper does not explicitly describe the hardware used for its simulations or analysis. |
| Software Dependencies | No | The paper does not provide specific software dependencies with version numbers for its analysis or simulations. |
| Experiment Setup | No | While the paper discusses simulations and theoretical analysis, it does not provide specific details about the experimental setup such as hyperparameters, batch sizes, or training schedules, as it focuses on theoretical properties and limits rather than training a machine learning model. |
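
For context on the correlated Erdős-Rényi model mentioned in the table, the following is a minimal sketch, assuming the standard subsampling construction (draw a parent G(n, p) graph and keep each parent edge in each child graph independently with probability s). The function name and the parameters `n`, `p`, `s` are illustrative and are not taken from the paper.

```python
# Minimal sketch, assuming the standard subsampling construction of a
# correlated Erdős-Rényi pair; names and parameters are illustrative,
# not the paper's notation.
import random

import networkx as nx


def correlated_erdos_renyi(n, p, s, seed=None):
    """Return two correlated graphs on the vertex set {0, ..., n-1}.

    Each edge of a parent G(n, p) graph is kept in each child graph
    independently with probability s, so the children are marginally
    G(n, p*s) and share edges through the common parent.
    """
    rng = random.Random(seed)
    parent = nx.gnp_random_graph(n, p, seed=rng.randint(0, 2**31 - 1))
    g1, g2 = nx.Graph(), nx.Graph()
    g1.add_nodes_from(range(n))
    g2.add_nodes_from(range(n))
    for u, v in parent.edges():
        if rng.random() < s:
            g1.add_edge(u, v)
        if rng.random() < s:
            g2.add_edge(u, v)
    return g1, g2
```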
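
The k-core estimator referenced in the table scores a candidate correspondence by the size of the k-core of the graph of edges the correspondence preserves in both graphs. The snippet below only illustrates that quantity for a fixed correspondence; it is not the paper's estimator, which optimizes over correspondences, and the helper name `kcore_overlap_size` is hypothetical.

```python
# Illustration of the k-core quantity for a fixed candidate matching:
# build the graph of edges preserved by the matching and report the size
# of its k-core. This does not search over matchings as the paper's
# k-core estimator does; it only conveys the objective being compared.
import networkx as nx


def kcore_overlap_size(g1, g2, matching, k):
    """Size of the k-core of the edge-intersection graph induced by
    `matching` (a dict mapping nodes of g1 to nodes of g2)."""
    common = nx.Graph()
    common.add_nodes_from(matching)
    for u, v in g1.edges():
        if u in matching and v in matching and g2.has_edge(matching[u], matching[v]):
            common.add_edge(u, v)
    return nx.k_core(common, k).number_of_nodes()


if __name__ == "__main__":
    # Self-contained check: matching a graph to itself under the identity
    # correspondence recovers the size of its own k-core.
    g = nx.gnp_random_graph(100, 0.1, seed=0)
    identity = {v: v for v in g.nodes()}
    print(kcore_overlap_size(g, g, identity, k=3))
```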