Exact Inference in High-order Structured Prediction
Authors: Chuyang Ke, Jean Honorio
ICML 2023
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Our work is highly theoretical. We provide a series of novel definitions and results in this paper: We provide a new class of hypergraph structural properties for high-order inference problems. We derive a novel Cheeger-type inequality... We propose a two-stage approach to solve the problem of high-order structured prediction... We carefully analyze the Karush-Kuhn-Tucker (KKT) conditions... Additionally, we implement a local-search based, hypergraph label propagation algorithm as a relevant baseline approximation method... We run the first stage of the proposed algorithm on a fourth-order hypergraph generated from the real-world dataset email-Eu-core (Yin et al., 2017; Leskovec et al., 2007). We pick the largest two communities in the network. The generated hypergraph has 200 nodes in total (91 and 109 in each group). We run our algorithm with a tensor projected gradient descent solver on the hypergraph, and exact inference of the 200 labels is achieved perfectly up to permutation of the two groups. |
| Researcher Affiliation | Academia | Department of Computer Science, Purdue University, West Lafayette, IN, USA. Correspondence to: Chuyang Ke <cke@purdue.edu>. |
| Pseudocode | No | The paper describes a two-stage algorithm but does not present it in a pseudocode block or a clearly labeled algorithm section. |
| Open Source Code | No | The paper does not contain any statement or link indicating that the source code for the described methodology is publicly available. |
| Open Datasets | Yes | We run the first stage of the proposed algorithm on a fourth-order hypergraph generated from the real-world dataset email-Eu-core (Yin et al., 2017; Leskovec et al., 2007). |
| Dataset Splits | No | The paper mentions using the 'email-Eu-core' dataset but does not specify any training, validation, or test splits. It only states 'The generated hypergraph has 200 nodes in total'. |
| Hardware Specification | No | The paper does not provide any specific details about the hardware used for running the experiments (e.g., GPU models, CPU types, or memory specifications). |
| Software Dependencies | No | The paper mentions using 'a tensor projected gradient descent solver motivated by Ke & Honorio (2022a); Han (2013)', but it does not specify any software names with version numbers for dependencies (e.g., Python, PyTorch, TensorFlow, or specific library versions). |
| Experiment Setup | Yes | For each setting we run 20 iterations. ... To initialize the propagation, we make the labels of five nodes visible to the algorithm. The algorithm proceeds as follows: for every unvisited node i, the algorithm will check its interaction with all known nodes repetitively to obtain enough samples (we set the number of repetitions to be 1001 in our script). |
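The label propagation baseline quoted in the Experiment Setup row can be sketched roughly as follows. This is a minimal illustration only, not the paper's implementation: it assumes noisy pairwise (second-order) same-group observations instead of the paper's fourth-order hyperedge interactions, and every function and parameter name here is hypothetical.

```python
import random

def noisy_interaction(y_i, y_j, flip_prob, rng):
    # Hypothetical noisy observation: +1 if the two nodes are in the
    # same group, -1 otherwise, flipped with probability `flip_prob`.
    s = 1 if y_i == y_j else -1
    return -s if rng.random() < flip_prob else s

def label_propagation(true_labels, seed_nodes, flip_prob=0.2,
                      repetitions=1001, rng=None):
    # Local-search propagation in the spirit of the quoted setup:
    # seed labels are visible, and each unvisited node queries its
    # interaction with all currently known nodes `repetitions` times
    # (1001 in the paper's script), taking a majority vote.
    rng = rng or random.Random(0)
    known = {i: true_labels[i] for i in seed_nodes}
    for i in range(len(true_labels)):
        if i in known:
            continue
        vote = 0
        for j, y_j in list(known.items()):
            agree = sum(noisy_interaction(true_labels[i], y_j, flip_prob, rng)
                        for _ in range(repetitions))
            # Majority over repeated samples decides agree/disagree with j.
            vote += y_j if agree > 0 else -y_j
        known[i] = 1 if vote >= 0 else -1  # newly labeled nodes propagate
    return [known[i] for i in range(len(true_labels))]
```

With two balanced groups and five visible seed labels, the majority over 1001 noisy samples per pair identifies each remaining label with overwhelming probability.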
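The first-stage solver mentioned in the Research Type row ("a tensor projected gradient descent solver") can be illustrated in spirit on a toy fourth-order objective. This is not the paper's solver: the rank-one planted tensor, the box relaxation of {-1, +1}^n, the normalized ascent step, and all names below are assumptions made for the sketch.

```python
import numpy as np

def planted_tensor(y):
    # Rank-one fourth-order affinity tensor W[i,j,k,l] = y_i y_j y_k y_l,
    # a noise-free stand-in for a fourth-order hypergraph observation.
    return np.einsum('i,j,k,l->ijkl', y, y, y, y).astype(float)

def tensor_pgd(W, n_iter=300, lr=0.1, seed=0):
    # Projected gradient ascent on f(x) = <W, x (x) x (x) x (x) x> over the
    # box [-1, 1]^n, followed by rounding to +/-1 labels.
    rng = np.random.default_rng(seed)
    n = W.shape[0]
    x = rng.uniform(-1.0, 1.0, n)
    for _ in range(n_iter):
        # Gradient of f for a symmetric fourth-order tensor W.
        g = 4 * np.einsum('ijkl,j,k,l->i', W, x, x, x)
        norm = np.linalg.norm(g)
        if norm < 1e-12:
            break
        # Normalized ascent step, then projection back onto the box.
        x = np.clip(x + lr * g / norm, -1.0, 1.0)
    return np.sign(x)
```

On the planted rank-one tensor the iterate converges to a corner of the box aligned with the planted partition, so the rounded labels match the ground truth up to a global sign flip, mirroring "exact inference ... up to permutation of the two groups" in the quoted experiment.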