Tight Continuous Relaxation of the Balanced k-Cut Problem

Authors: Syama Sundar Rangapuram, Pramod Kaushik Mudrakarta, Matthias Hein

NeurIPS 2014

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | The paper states: 'Extensive comparisons show that our method outperforms all existing approaches for ratio cut and other balanced k-cut criteria' and 'Extensive experiments show that our method outperforms all existing methods in terms of the achieved balanced k-cuts.'
Researcher Affiliation | Academia | Syama Sundar Rangapuram, Pramod Kaushik Mudrakarta and Matthias Hein, Department of Mathematics and Computer Science, Saarland University, Saarbrücken
Pseudocode | No | The paper describes the algorithm and its steps through narrative text and mathematical formulations, but it does not include a clearly labeled pseudocode block or algorithm section within the main body.
Open Source Code | No | The paper states 'We used the publicly available code [22, 13] with default settings.', which refers to code for other methods used for comparison, not the authors' own implementation.
Open Datasets | Yes | 'In addition to the datasets provided in [13], we also selected a variety of datasets from the UCI repository shown below. For all the datasets not in [13], symmetric k-NN graphs are built with Gaussian weights...' Datasets used: Iris, wine, vertebral, ecoli, 4moons, webkb4, optdigits, USPS, pendigits, 20news, MNIST.
Dataset Splits | No | The paper does not explicitly state specific percentages, sample counts, or a detailed methodology for splitting data into training, validation, and test sets. It mentions 'train' and 'test' in context but does not provide explicit validation split information.
Hardware Specification | No | The paper does not explicitly specify any hardware details such as CPU/GPU models, memory, or specific computing environments used for running the experiments.
Software Dependencies | No | The paper mentions using the 'PDHG algorithm [15, 16]' and refers to 'publicly available code [22, 13]' for comparative methods, but it does not provide software dependencies or version numbers for its own implementation (e.g., programming languages, libraries, or solvers with version numbers).
Experiment Setup | Yes | 'We run our method using 5 random initializations, 7 initializations based on the spectral clustering solution similar to [13] (who use 30 such initializations).' 'In addition to the datasets provided in [13]... We chose the parameters s and k in a method independent way by testing for each dataset several graphs using all the methods over different choices of k ∈ {3, 5, 7, 10, 15, 20, 40, 60, 80, 100} and s ∈ {0.1, 1, 4}. The best choice in terms of the clustering error across all the methods and datasets is s = 1, k = 15.'
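For context, the experiment setup above rests on two standard ingredients: a symmetric k-NN graph with Gaussian weights (with the reported best parameters k = 15, s = 1) and the ratio-cut objective that the paper's claims refer to. A minimal sketch of both follows; the function names and the per-point scaling of the Gaussian weights by the k-th neighbor distance are assumptions for illustration, not the paper's exact construction.

```python
import numpy as np

def knn_gaussian_graph(X, k=15, s=1.0):
    """Symmetric k-NN graph with Gaussian weights (sketch).

    An edge (i, j) is kept if i is among the k nearest neighbors of j or
    vice versa.  The weight exp(-s * d(i,j)^2 / sigma_i) uses a per-point
    scale sigma_i (squared distance to the k-th neighbor) -- an assumed,
    common choice; the paper does not spell out its exact scaling here.
    """
    n = X.shape[0]
    # pairwise squared Euclidean distances
    D = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    # squared distance to the k-th nearest neighbor (index 0 is self)
    sigma = np.sort(D, axis=1)[:, k]
    W = np.zeros((n, n))
    for i in range(n):
        nbrs = np.argsort(D[i])[1:k + 1]          # skip self
        W[i, nbrs] = np.exp(-s * D[i, nbrs] / max(sigma[i], 1e-12))
    return np.maximum(W, W.T)                      # symmetrize (union of k-NN)

def ratio_cut(W, labels):
    """Ratio cut: sum over clusters C of cut(C, V \\ C) / |C|."""
    val = 0.0
    for c in np.unique(labels):
        mask = labels == c
        val += W[mask][:, ~mask].sum() / mask.sum()
    return val
```

On well-separated data, a clustering that respects the graph structure yields a much smaller ratio cut than an arbitrary partition, which is what the comparisons in the paper measure (achieved balanced k-cut values rather than, say, only clustering error).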