Approximate Large-scale Multiple Kernel k-means Using Deep Neural Network

Authors: Yueqing Wang, Xinwang Liu, Yong Dou, Rongchun Li

IJCAI 2017

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Extensive experiments show that our algorithm consumes less time than most comparatively similar algorithms, while it achieves comparable performance with MKC algorithms.
Researcher Affiliation | Academia | Yueqing Wang, Xinwang Liu, Yong Dou, Rongchun Li; National Laboratory for Parallel and Distributed Processing, NUDT, Changsha, China, 410073; xinwangliu@nudt.edu.cn
Pseudocode | Yes | Algorithm 1: Training stage
Open Source Code | No | The paper does not provide any explicit statement or link regarding the availability of its source code.
Open Datasets | Yes | We evaluate our algorithm on eight datasets detailed in Table 1. ... such as mnist10k, cifar100-10k, Oxford 102 Category Flowers (102flowers), and birds200... Caltech256, cifar100, mnist, and ImageNet... For all datasets, we use two 4096-dimensional features extracted using the AlexNet model [Krizhevsky et al., 2012] and Visual Geometry Group-19 (VGG19) model [Simonyan and Zisserman, 2014] to represent the images. (See the feature-extraction sketch after this table.)
Dataset Splits | No | The paper mentions a 'training stage' using a sampled 'subset' and a 'testing stage' applying the trained network to the 'whole dataset', but it does not specify explicit train/validation/test dataset splits with proportions or counts for reproducible evaluation.
Hardware Specification | Yes | All the algorithms reported in this paper are performed on a workstation with a 32-core Intel E5-2650 2.00 GHz processor and 256 GB memory.
Software Dependencies | No | The paper does not provide specific software dependencies with version numbers (e.g., library names with versions).
Experiment Setup | Yes | Our network includes one 1D convolutional layer, one max pooling layer, and four fully connected layers. ... The corresponding loss function of our network can be written as follows: $J_H(\theta) = \| f_\theta(X) - H_{sub} \|^2 + \| \theta \|_F$, ... To regress $H_{sub}$, we use the stochastic gradient descent (SGD) method to minimize the loss function. (See the implementation sketch after this table.)
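
The 4096-dimensional AlexNet and VGG19 features quoted in the Open Datasets row can be reproduced with standard pretrained models. The snippet below is a minimal sketch only, assuming PyTorch/torchvision as the framework; the paper does not name a toolkit, and the illustrative file name `example.jpg` is not from the paper.

```python
# Minimal sketch: extracting 4096-d AlexNet and VGG19 image features.
# Assumption: PyTorch/torchvision stand in for whatever framework the authors used.
import torch
import torchvision.models as models
import torchvision.transforms as T
from PIL import Image

preprocess = T.Compose([
    T.Resize(256), T.CenterCrop(224), T.ToTensor(),
    T.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

def feature_extractor(model):
    # Drop the final 1000-way classification layer so the network
    # outputs the 4096-d activation of its last hidden FC layer.
    model.classifier = torch.nn.Sequential(*list(model.classifier.children())[:-1])
    return model.eval()

alexnet = feature_extractor(models.alexnet(weights="IMAGENET1K_V1"))
vgg19 = feature_extractor(models.vgg19(weights="IMAGENET1K_V1"))

img = preprocess(Image.open("example.jpg").convert("RGB")).unsqueeze(0)
with torch.no_grad():
    feat_alex = alexnet(img)   # shape: (1, 4096)
    feat_vgg = vgg19(img)      # shape: (1, 4096)
```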
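
The Experiment Setup row describes the regression network only at a high level: one 1D convolutional layer, one max-pooling layer, four fully connected layers, trained with SGD to regress $H_{sub}$. The sketch below is one plausible PyTorch rendering; the channel counts, kernel sizes, layer widths, learning rate, epoch count, and the weight-decay term standing in for the Frobenius-norm regularizer are all illustrative assumptions, not values from the paper.

```python
# Minimal sketch of the regression network described in the quoted setup:
# one 1D convolutional layer, one max-pooling layer, four fully connected
# layers, trained with SGD to regress the subset eigenvector matrix H_sub.
# All hyperparameters below are illustrative assumptions.
import torch
import torch.nn as nn

class HRegressor(nn.Module):
    def __init__(self, in_dim=4096, out_dim=10):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv1d(1, 8, kernel_size=5, padding=2),  # 1D convolution
            nn.ReLU(),
            nn.MaxPool1d(kernel_size=4),                # max pooling
        )
        flat = 8 * (in_dim // 4)
        self.fc = nn.Sequential(                        # four fully connected layers
            nn.Linear(flat, 1024), nn.ReLU(),
            nn.Linear(1024, 256), nn.ReLU(),
            nn.Linear(256, 64), nn.ReLU(),
            nn.Linear(64, out_dim),
        )

    def forward(self, x):                               # x: (batch, in_dim)
        x = self.conv(x.unsqueeze(1))                   # add channel dim -> (batch, 1, in_dim)
        return self.fc(x.flatten(1))

# SGD minimizes ||f_theta(X) - H_sub||^2; weight decay plays the role
# of the Frobenius-norm regularizer on theta.
def train(model, X_sub, H_sub, epochs=50):
    opt = torch.optim.SGD(model.parameters(), lr=1e-2, weight_decay=1e-4)
    loss_fn = nn.MSELoss()
    for _ in range(epochs):
        opt.zero_grad()
        loss = loss_fn(model(X_sub), H_sub)
        loss.backward()
        opt.step()
    return model
```

Per the Dataset Splits row, training uses only a sampled subset; at test time the trained network would be applied to the whole dataset to approximate its eigenvector matrix.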