Ranking Preserving Nonnegative Matrix Factorization

Authors: Jing Wang, Feng Tian, Weiwei Liu, Xiao Wang, Wenjie Zhang, Kenji Yamanishi

IJCAI 2018

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Experimental results on several datasets for clustering and classification demonstrate that RPNMF achieves superior performance over the state-of-the-art methods, not only in terms of accuracy but also in its interpretation of ordinal data structure.
Researcher Affiliation | Academia | (1) Graduate School of Information Science and Technology, The University of Tokyo, Japan; (2) Faculty of Science and Technology, Bournemouth University, UK; (3) School of Computer Science and Engineering, The University of New South Wales, Australia; (4) School of Computer Science, Beijing University of Posts and Telecommunications, China.
Pseudocode | No | The paper presents the updating rules as mathematical equations (11), (12), (15), (16), and (17), but does not provide a clearly labeled 'Pseudocode' or 'Algorithm' block.
Open Source Code | No | The paper does not provide any concrete access information for open-source code implementing the described methodology.
Open Datasets | Yes | The Yale dataset [Liu et al., 2012] contains 11 face images for each of 15 subjects. The ORL dataset [Liu et al., 2012] consists of 400 face images of 40 different subjects. The Coil20 dataset [Wang et al., 2017b] is composed of 1440 images of 20 objects. The NHill dataset [Wang et al., 2017b] is a face dataset sampled from the movie Notting Hill. The Cartoon dataset [Wang et al., 2017a] is a video sequence extracted from a short animation available online, which has 282 frames across three scenes. The Hdm05 dataset is a motion capture dataset; as in [Wang et al., 2017a], scene 1-1 was chosen, which contains 9842 frames and 14 activities.
Dataset Splits | Yes | For each dataset, 80% of the data from each class was randomly selected as the training set and the remainder as the test set.
Hardware Specification | Yes | All the experiments were done using Matlab 2014 on an Intel Core 3.50 GHz desktop.
Software Dependencies | Yes | All the experiments were done using Matlab 2014. Similar to [Liu and Tsang, 2017; Liu et al., 2017a], the LIBLINEAR package [Fan et al., 2008] was used to train the classifiers.
Experiment Setup | Yes | For RPNMF, the regularization parameters α and δ were varied within {0.0001, 0.001, 0.01, 0.1, 1} and {0.001, 0.01, 0.1, 1, 10, 100}, respectively. To construct ordinal relations for t-STE and RPNMF, 10% of the data was first randomly selected from each dataset, and then 30 ordinal relations were constructed for each selected data point as in [Chang et al., 2014]. Since k-means is sensitive to initial values, the clustering was repeated 50 times, each with a new set of initial centroids. Moreover, since all the compared methods converge to a local minimum, each method was run 10 times to avoid randomness.
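The evaluation protocol reported in the table (per-class 80/20 split, and a grid over the regularization parameters α and δ) can be sketched as below. This is a minimal illustration, not the authors' code: the function name `stratified_split` and the Yale-like label layout are assumptions for demonstration; only the split fraction and the grid values come from the paper.

```python
# Sketch of the reported evaluation protocol. All names are illustrative
# assumptions; only the 80% split and the grid values come from the paper.
import itertools
import random

def stratified_split(labels, train_frac=0.8, seed=0):
    """Per class, randomly assign train_frac of the indices to the
    training set and the rest to the test set."""
    rng = random.Random(seed)
    by_class = {}
    for idx, y in enumerate(labels):
        by_class.setdefault(y, []).append(idx)
    train, test = [], []
    for idxs in by_class.values():
        rng.shuffle(idxs)
        cut = int(round(train_frac * len(idxs)))
        train.extend(idxs[:cut])
        test.extend(idxs[cut:])
    return sorted(train), sorted(test)

# Regularization grid reported for RPNMF: every (alpha, delta) pair is tried.
alphas = [0.0001, 0.001, 0.01, 0.1, 1]
deltas = [0.001, 0.01, 0.1, 1, 10, 100]
grid = list(itertools.product(alphas, deltas))  # 5 x 6 = 30 settings

# Example labels: 15 classes of 11 samples each, matching the Yale dataset.
labels = [c for c in range(15) for _ in range(11)]
train_idx, test_idx = stratified_split(labels)
```

Splitting per class (rather than over the pooled indices) keeps the class proportions identical in the training and test sets, which matters for small datasets such as Yale, where each class has only 11 samples.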