Finding Bipartite Components in Hypergraphs

Authors: Peter Macgregor, He Sun

NeurIPS 2021 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "We theoretically prove the performance of our proposed algorithm, and compare it against the previous state-of-the-art through extensive experimental analysis on both synthetic and real-world datasets."
Researcher Affiliation | Academia | "Peter Macgregor, School of Informatics, University of Edinburgh, peter.macgregor@ed.ac.uk; He Sun, School of Informatics, University of Edinburgh, h.sun@ed.ac.uk"
Pseudocode | Yes | "Algorithm 1: FINDBIPARTITECOMPONENTS. Input: hypergraph H, starting vector f0 ∈ R^n, step size ϵ > 0. Output: sets L and R."
Open Source Code | Yes | "Our code can be downloaded from https://github.com/pmacg/hypergraph-bipartite-components."
Open Datasets | Yes | "In particular, on the well-known Penn Treebank corpus that contains 49,208 sentences and over 1 million words... The Penn Treebank dataset is an English-language corpus... [22]" and "We construct a hypergraph from a subset of the DBLP network consisting of 14,376 papers published in artificial intelligence and machine learning conferences [12, 32]."
Dataset Splits | No | The paper describes the generation of synthetic datasets and the structure of the real-world datasets, but it does not specify how these datasets were split into training, validation, or test sets for model training or evaluation.
Hardware Specification | Yes | "The experiments are performed using an Intel(R) Core(TM) i5-8500 CPU @ 3.00GHz processor, with 16 GB RAM."
Software Dependencies | Yes | "All algorithms are implemented in Python 3.6, using the scipy library for sparse matrix representations and linear programs."
Experiment Setup | Yes | "We always set the parameter ϵ = 1 for FBC and FBCA, and we set the starting vector f0 ∈ R^n for the diffusion to be the eigenvector corresponding to the minimum eigenvalue of J_G, where G is the clique reduction of the hypergraph H."
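
The experiment setup row can be illustrated with a short sketch. The following is a minimal, hypothetical Python/scipy snippet (it is not the authors' released code, which is linked above) showing one way the starting vector f0 could be constructed. It assumes an unweighted clique reduction and that J_G denotes a signless-Laplacian-style operator D + A on that reduction; consult the paper for the exact definition and any edge weighting. The helper names clique_reduction and starting_vector, and the toy hyperedge list, are illustrative only.

import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import eigsh

def clique_reduction(n, hyperedges):
    """Replace each hyperedge with a clique on its vertices.

    Assumption: unweighted clique reduction with multi-edges summed;
    the paper may weight the reduced edges differently.
    """
    A = sp.lil_matrix((n, n))
    for e in hyperedges:
        for u in e:
            for v in e:
                if u != v:
                    A[u, v] += 1
    return A.tocsr()

def starting_vector(n, hyperedges):
    """Return f0: the eigenvector for the minimum eigenvalue of J_G.

    Assumption: J_G is taken here to be the signless-Laplacian-style
    operator D + A of the clique reduction G of the hypergraph H.
    """
    A = clique_reduction(n, hyperedges)
    degrees = np.asarray(A.sum(axis=1)).flatten()
    J = sp.diags(degrees) + A
    # Smallest algebraic eigenvalue and its eigenvector.
    _, vecs = eigsh(J, k=1, which='SA')
    return vecs[:, 0]

if __name__ == "__main__":
    # Toy hypergraph on 6 vertices; f0 would then be passed to the
    # diffusion (Algorithm 1) together with the step size ϵ = 1.
    hyperedges = [(0, 1, 2), (2, 3), (3, 4, 5)]
    f0 = starting_vector(6, hyperedges)
    print(f0)

The snippet only reproduces the stated setup (the choice of f0 and ϵ); the diffusion process of FINDBIPARTITECOMPONENTS itself is defined in the paper and the linked repository.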