Scalable Normalized Cut with Improved Spectral Rotation

Authors: Xiaojun Chen, Feiping Nie, Joshua Zhexue Huang, Min Yang

IJCAI 2017 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | A series of experiments were conducted on 14 benchmark data sets and the experimental results show the superior performance of the new method.
Researcher Affiliation | Collaboration | 1. College of Computer Science and Software Engineering, Shenzhen University, Shenzhen 518060, P.R. China; 2. School of Computer Science and Center for OPTical IMagery Analysis and Learning (OPTIMAL), Northwestern Polytechnical University, Xi'an 710072, P.R. China; 3. Tencent AI Lab, Shenzhen, P.R. China
Pseudocode | Yes | Algorithm 1: Improved Spectral Rotation (ISR) to solve problem (9); Algorithm 2: Scalable Normalized Cut (SNC) to solve problem (22)
Open Source Code | No | The paper does not provide concrete access to source code (e.g., a specific repository link or explicit code release statement) for the methodology described.
Open Datasets | Yes | 8 benchmark data sets were selected from the UCI Machine Learning Repository and Feiping Nie's page; 6 large-scale benchmark data sets were selected from the UCI Machine Learning Repository and Feiping Nie's page. (footnote: http://www.escience.cn/people/fpnie/index.html#)
Dataset Splits | No | The paper does not provide specific dataset split information (exact percentages, sample counts, or citations to predefined splits) for training, validation, or testing.
Hardware Specification | No | The paper does not provide specific hardware details (e.g., GPU/CPU models, processor types, memory amounts, or detailed computer specifications) used for running its experiments.
Software Dependencies | No | The paper does not provide specific ancillary software details with version numbers (e.g., library or solver names with version numbers) needed to replicate the experiment.
Experiment Setup | Yes | For each data set, we set five neighborhood parameters k = {10, 20, ..., 50} to construct five affinity matrices with the method in [Nie et al., 2016] ... For each data set, we used the same clustering result for anchor generation in KASP, CSC, LSC and SNC, where 10 values were selected for m. The neighborhood parameters were set as {10, 20, ..., 50} for all data sets. We used the Gaussian kernel to compute similarities for all methods excluding SNC, where the parameter h was set as the average distance between two points in the data set (as used in [Cai and Chen, 2015]).
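The experiment setup quoted above (k-nearest-neighbor affinity graphs with k swept over {10, 20, ..., 50}, Gaussian similarities with bandwidth h set to the average pairwise distance) can be sketched roughly as follows. This is a minimal illustration under those stated assumptions, not the authors' released code: the function name gaussian_knn_affinity and its parameters are ours, and the exact kernel and neighbor-selection rules of [Nie et al., 2016] and [Cai and Chen, 2015] may differ in detail.

    import numpy as np
    from scipy.spatial.distance import pdist, squareform

    def gaussian_knn_affinity(X, k=10, h=None):
        """Sketch of a k-NN Gaussian affinity matrix (hypothetical helper).

        X : (n, d) data matrix.
        k : neighborhood size (the paper sweeps k over {10, 20, ..., 50}).
        h : Gaussian bandwidth; if None, it is set to the average distance
            between two points, as in the experiment setup quoted above.
        """
        D = squareform(pdist(X))                        # n x n Euclidean distances
        if h is None:
            h = D[np.triu_indices_from(D, k=1)].mean()  # average pairwise distance
        W = np.exp(-(D ** 2) / (2.0 * h ** 2))          # Gaussian (RBF) similarities
        np.fill_diagonal(W, 0.0)                        # drop self-similarities

        # Keep only each point's k most similar neighbors, then symmetrize.
        drop = np.argsort(W, axis=1)[:, :-k]            # indices of non-neighbors per row
        rows = np.repeat(np.arange(W.shape[0]), drop.shape[1])
        W[rows, drop.ravel()] = 0.0
        return np.maximum(W, W.T)

Note that such a sketch would only correspond to the Gaussian-kernel construction used for the baseline methods; SNC itself builds its graph through anchor points, as described in the paper.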