Re-revisiting Learning on Hypergraphs: Confidence Interval and Subgradient Method

Authors: Chenzi Zhang, Shuguang Hu, Zhihao Gavin Tang, T-H. Hubert Chan

ICML 2017

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "Our experiments on real-world datasets confirm that our confidence interval approach on hypergraphs outperforms existing methods, and our sub-gradient method gives faster running times when the number of vertices is much larger than the number of edges. (...) 4. Experimental Results on Real-World Datasets. In Section 5, we revisit some datasets in the UCI Machine Learning Repository (Lichman, 2013), and experiments confirm that our prediction model based on confidence interval gives better accuracy than that in (Hein et al., 2013)."
Researcher Affiliation | Academia | "University of Hong Kong, Hong Kong. This research was partially supported by the Hong Kong RGC under the grant 17200214."
Pseudocode | Yes | Algorithm 1 Semi-Supervised Learning (...) Algorithm 2 Confidence Intervals for Undirected Hypergraphs (...) Algorithm 3 Estimate confidence interval (...) Algorithm 4 Markov Operator M : R^V → R^N (...) Algorithm 5 Subgradient Method SGM(f^(0) ∈ R^N)
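The quoted pseudocode names a subgradient method (Algorithm 5, SGM). As a point of reference for readers attempting reproduction, here is a minimal generic subgradient-method sketch, not the paper's algorithm: the function names, the toy L1 objective, and the diminishing step size 1/(t+1) are our own illustrative choices. Because subgradient steps are not monotone in the objective, the sketch tracks the best iterate seen.

```python
import numpy as np

def subgradient_method(f, subgrad, x0, steps=2000):
    # Generic subgradient iteration: x_{t+1} = x_t - eta_t * g_t,
    # with a classic diminishing step size eta_t = 1/(t+1).
    # Keep the best objective value seen, since f(x_t) may oscillate.
    x = np.asarray(x0, dtype=float)
    best_x, best_f = x.copy(), f(x)
    for t in range(steps):
        eta = 1.0 / (t + 1)
        x = x - eta * subgrad(x)
        fx = f(x)
        if fx < best_f:
            best_x, best_f = x.copy(), fx
    return best_x, best_f

# Toy nonsmooth problem: f(x) = ||x - c||_1, minimized at x = c.
c = np.array([1.0, -2.0])
f = lambda x: np.abs(x - c).sum()
subgrad = lambda x: np.sign(x - c)  # a valid subgradient of the L1 term
x_star, f_star = subgradient_method(f, subgrad, x0=np.zeros(2))
```

The best-iterate bookkeeping is the standard remedy for the non-descent behavior of subgradient steps on nonsmooth objectives.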
Open Source Code | No | The paper does not include an unambiguous statement where the authors release the code for the work described, nor does it provide a direct link to a source-code repository.
Open Datasets | Yes | "In Section 5, we revisit some datasets in the UCI Machine Learning Repository (Lichman, 2013)"
Dataset Splits | No | The paper describes how labeled and unlabeled vertices are selected ("randomly pick l vertices from the dataset to form the set L and treat the rest as unlabeled vertices") and mentions 5-fold cross-validation for a baseline method (Hein et al.), but it does not provide explicit training/validation/test dataset splits with percentages or counts for its own method's experimental setup.
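The quoted split procedure ("randomly pick l vertices from the dataset to form the set L and treat the rest as unlabeled vertices") can be sketched as follows. The function name, seed handling, and return types are hypothetical conveniences for reproduction; the paper states only the random selection itself.

```python
import random

def split_labeled_unlabeled(vertices, l, seed=0):
    # Randomly pick l vertices as the labeled set L;
    # every remaining vertex is treated as unlabeled.
    rng = random.Random(seed)  # fixed seed for reproducibility (our choice)
    labeled = set(rng.sample(list(vertices), l))
    unlabeled = [v for v in vertices if v not in labeled]
    return sorted(labeled), unlabeled

L_set, U_set = split_labeled_unlabeled(range(100), l=20)
```

Fixing the seed makes the split reproducible across runs, which matters when comparing accuracy against baselines on the same labeled set.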
Hardware Specification | No | The paper only states "Our experiments are run on a standard PC.", which does not provide specific hardware details like GPU/CPU models or memory.
Software Dependencies | No | The paper does not provide specific ancillary software details, such as library or solver names with version numbers, needed to replicate the experiment.
Experiment Setup | Yes | "Empirically, the step size η_t := 1/(t+1) · min(0.16t/10^5, 1) gives good performance. For PDHG, we choose σ = τ = 1/(1+d), where d is the maximum degree."
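The step-size formula above is flattened in the extracted text, so its grouping is ambiguous. One plausible reading, with the min term as a warm-up ramp multiplying a 1/(t+1) decay, can be written down directly; this grouping is our interpretation, not confirmed by the source.

```python
def eta(t):
    # Hedged reading of the reported empirical step size:
    #   eta_t := 1/(t+1) * min(0.16 * t / 10^5, 1)
    # The min(...) term ramps from 0 to 1 over the first ~625,000 steps,
    # after which eta_t decays as 1/(t+1).
    return (1.0 / (t + 1)) * min(0.16 * t / 1e5, 1.0)
```

Under this reading, η_0 = 0 (no movement on the very first step), which is one reason the grouping may differ in the paper; readers should check the published formula before relying on it.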