Efficient Multi-view Unsupervised Feature Selection with Adaptive Structure Learning and Inference
Authors: Chenglong Zhang, Yang Fang, Xinyan Liang, Han Zhang, Peng Zhou, Xingyu Wu, Jie Yang, Bingbing Jiang, Weiguo Sheng
IJCAI 2024
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | In this section, six real-world datasets are employed, including flower-17, Leaves, NUS, Scene, ALOI, and Youtube. The details of each dataset are listed in Table 2. To comprehensively verify the superiority and effectiveness of EMUFS, we conduct experiments with six state-of-the-art competitors, including (1) Unsupervised Feature Selection with Structured Graph Optimization (SOGFS) [Nie et al., 2016]; (2) Multi-View Clustering and Feature Learning via Structured Sparsity (MVCSS) [Wang et al., 2013]; (3) Multi-view Unsupervised Feature Selection with Adaptive Similarity and View Weight (ASVW) [Hou et al., 2017]; (4) Multi-view Feature Selection via Nonnegative Structured Graph Learning (NSGL) [Bai et al., 2020]; (5) Multilevel Projections with Adaptive Neighbor Graph for Unsupervised Multi-View Feature Selection (MAMFS) [Zhang et al., 2021]; (6) Robust Unsupervised Feature Selection via Multi-Group Adaptive Graph Representation (MGAGR) [You et al., 2023]. To ensure comparison fairness, the parameters of all competitors are tuned following their respective works. The regularization parameters for EMUFS are searched in a grid of {10^-3, 10^-2, ..., 10^3}, with the number of anchors set as m = 10% × n. K-means clustering is independently executed 20 times on the selected feature subsets, and the average results, including the clustering accuracy (ACC) and the normalized mutual information (NMI), are reported to evaluate the performance. |
| Researcher Affiliation | Academia | 1Hangzhou Normal University, Hangzhou, China 2Chongqing University of Posts and Telecommunications, Chongqing, China 3Shanxi University, Taiyuan, China 4Northwestern Polytechnical University, Xi'an, China 5Anhui University, Hefei, China 6Hong Kong Polytechnic University, Hong Kong SAR, China 7University of Technology Sydney, NSW, Australia |
| Pseudocode | Yes | Algorithm 1 Optimization procedures for EMUFS |
| Open Source Code | No | The paper does not contain any explicit statement about making the source code for EMUFS publicly available, nor does it provide a link to a code repository. |
| Open Datasets | Yes | In this section, six real-world datasets are employed, including flower-17¹, Leaves², NUS³, Scene⁴, ALOI⁵ and Youtube⁶. The details of each dataset are listed in Table 2. 1https://www.robots.ox.ac.uk/~vgg/data/flowers/ 2https://archive.ics.uci.edu/dataset/ 3https://lms.comp.nus.edu.sg/wp-content/uploads/2019/research/nuswide/NUS-WIDE.html 4http://people.csail.mit.edu/torralba/code/spatialenvelope/ 5https://aloi.science.uva.nl/ 6https://archive.ics.uci.edu/dataset/269/ |
| Dataset Splits | No | The paper mentions that "The Kmeans clustering is independently executed 20 times on the selected feature subsets" and reports "running times versus the training sample scale", but it does not provide explicit details about train/validation/test dataset splits or a cross-validation setup. |
| Hardware Specification | No | The paper does not specify any hardware details (like specific CPU/GPU models or memory) used for running the experiments. |
| Software Dependencies | No | The paper does not specify any software dependencies with version numbers (e.g., programming languages, libraries, frameworks) used for implementation. |
| Experiment Setup | Yes | The regularization parameters for EMUFS are searched in a grid of {10^-3, 10^-2, ..., 10^3}, with the number of anchors set as m = 10% × n. |
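The evaluation protocol quoted above (K-means executed 20 times on the selected feature subset, with mean ACC and NMI reported) is standard for unsupervised feature selection and can be sketched as follows. This is an illustrative sketch, not the authors' code: the function names, the scikit-learn/SciPy dependencies, and the Hungarian-matching definition of ACC are assumptions on my part, since the paper releases no implementation.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment
from sklearn.cluster import KMeans
from sklearn.metrics import normalized_mutual_info_score


def clustering_accuracy(y_true, y_pred):
    """ACC: find the best one-to-one mapping between predicted cluster
    labels and ground-truth classes (Hungarian algorithm), then return
    the fraction of correctly matched samples."""
    y_true = np.asarray(y_true)
    y_pred = np.asarray(y_pred)
    k = int(max(y_true.max(), y_pred.max())) + 1
    # Contingency matrix: cost[p, t] counts samples with prediction p and class t.
    cost = np.zeros((k, k), dtype=np.int64)
    for t, p in zip(y_true, y_pred):
        cost[p, t] += 1
    # Maximize total agreement by minimizing (max - cost).
    row, col = linear_sum_assignment(cost.max() - cost)
    return cost[row, col].sum() / y_true.size


def evaluate_selected_features(X_sel, y, n_runs=20, seed=0):
    """Run K-means n_runs times on the selected-feature matrix X_sel
    and return the mean ACC and mean NMI, mirroring the paper's
    '20 independent runs, average reported' protocol."""
    k = len(np.unique(y))
    accs, nmis = [], []
    for r in range(n_runs):
        pred = KMeans(n_clusters=k, n_init=10,
                      random_state=seed + r).fit_predict(X_sel)
        accs.append(clustering_accuracy(y, pred))
        nmis.append(normalized_mutual_info_score(y, pred))
    return float(np.mean(accs)), float(np.mean(nmis))
```

In a reproduction, `X_sel` would be the data restricted to the features ranked highest by EMUFS (or a competitor), and the grid search over {10^-3, ..., 10^3} would wrap this evaluation for each regularization setting.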