Adaptive Hypergraph Learning for Unsupervised Feature Selection

Authors: Xiaofeng Zhu, Yonghua Zhu, Shichao Zhang, Rongyao Hu, Wei He

IJCAI 2017

| Reproducibility Variable | Result | LLM Response |
| --- | --- | --- |
| Research Type | Experimental | "Experimental results show that our proposed method outperforms all the comparison methods in terms of clustering tasks." |
| Researcher Affiliation | Academia | 1. Guangxi Key Lab of Multi-source Information Mining & Security, China; 2. Guangxi Normal University, China; 3. Guangxi University, China |
| Pseudocode | No | The paper describes the optimization steps and equations but does not present them in a formal pseudocode or algorithm block. |
| Open Source Code | No | The paper provides no statement or link indicating that source code for the described method is available. |
| Open Datasets | Yes | "In this section, we evaluate our proposed AHLFS with the comparison methods in terms of the clustering accuracy of the clustering tasks, on eight public UCI datasets [Frank et al., 2010], whose detail is listed in Table 1." |
| Dataset Splits | No | The paper mentions selecting feature subsets (e.g., {20%, ..., 80%} of the features) and repeating k-means clustering 20 times to report average results, but does not specify a training/validation/test split of the datasets for model validation. |
| Hardware Specification | No | The paper does not describe the hardware used to run the experiments. |
| Software Dependencies | No | The paper does not name any software packages or version numbers. |
| Experiment Setup | Yes | "In our experiments, we set the parameters range as {10^-3, 10^-2, ..., 10^3} where all the methods can achieve their best results. Our objective function has two tuning parameters, i.e., α and β." |
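The Dataset Splits row refers to the paper's evaluation protocol: cluster the data restricted to the selected features with k-means, repeat 20 times, and report the average accuracy. A minimal sketch of that protocol, using a toy two-blob dataset and a from-scratch Lloyd's k-means (the data, cluster count, and helper names here are illustrative assumptions, not taken from the paper):

```python
import numpy as np

def kmeans(X, k, rng, n_iter=50):
    # Lloyd's algorithm with random initial centroids drawn from the data.
    centers = X[rng.choice(len(X), k, replace=False)]
    for _ in range(n_iter):
        labels = np.argmin(((X[:, None] - centers) ** 2).sum(-1), axis=1)
        for j in range(k):
            pts = X[labels == j]
            if len(pts):
                centers[j] = pts.mean(axis=0)
    return labels

# Toy data: two well-separated blobs, standing in for a UCI dataset
# restricted to the selected feature subset.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 0.1, (30, 2)), rng.normal(3, 0.1, (30, 2))])
y = np.array([0] * 30 + [1] * 30)

# Repeat k-means 20 times with different initialisations and average the
# clustering accuracy, mirroring the protocol the paper describes.
accs = []
for seed in range(20):
    labels = kmeans(X, 2, np.random.default_rng(seed))
    # For k=2, trying both label assignments resolves the permutation ambiguity.
    accs.append(max((labels == y).mean(), (labels != y).mean()))
print(round(float(np.mean(accs)), 3))
```

Averaging over repeated random initialisations is what makes the reported accuracy comparable across methods despite k-means's sensitivity to its starting centroids.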
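The Experiment Setup row quotes a log-spaced search grid for the two tuning parameters α and β. Since no reference implementation is released, the grid search can only be sketched against a placeholder: `run_ahlfs` and its dummy score below are hypothetical stand-ins for the unavailable AHLFS code.

```python
import itertools

# Candidate values 10^-3, 10^-2, ..., 10^3, as stated in the paper's setup.
grid = [10.0 ** p for p in range(-3, 4)]

def run_ahlfs(alpha, beta):
    # Hypothetical placeholder for the AHLFS objective: returns a dummy
    # score peaking at (alpha, beta) = (1, 10) purely for illustration.
    return -((alpha - 1) ** 2 + (beta - 10) ** 2)

# Exhaustive search over all (alpha, beta) pairs, keeping the best score.
best = max(itertools.product(grid, grid), key=lambda ab: run_ahlfs(*ab))
print(best)  # → (1.0, 10.0) for this dummy objective
```

In practice the score would be the average clustering accuracy from the repeated k-means runs, and the best (α, β) pair per dataset would be the one reported.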