Storage Fit Learning with Unlabeled Data

Authors: Bo-Jian Hou, Lijun Zhang, Zhi-Hua Zhou

IJCAI 2017

| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | "Experiments show that the proposed methods can fit adaptively different storage budgets and obtain good performances in different scenarios." |
| Researcher Affiliation | Academia | "National Key Laboratory for Novel Software Technology, Nanjing University, Nanjing 210023, China. {houbj, zhanglj, zhouzh}@lamda.nju.edu.cn" |
| Pseudocode | Yes | "Algorithm 1 SoCK" |
| Open Source Code | No | The paper does not contain any explicit statements about releasing source code or provide links to a code repository. |
| Open Datasets | Yes | "We run experiments on two large scale UCI data sets, named as adult-a and w8a respectively." |
| Dataset Splits | Yes | "We randomly pick 1K or 3K examples to use as labeled training examples and regard the remaining ones as unlabeled examples. All the parameters are selected via five-fold cross-validation." (See the split sketch below.) |
| Hardware Specification | Yes | "All the experiments performed on the cores with CPU clocked at 2.53GHz." |
| Software Dependencies | No | The paper mentions 'MATLAB' but does not provide any specific version numbers for software or libraries used in the experiments. |
| Experiment Setup | Yes | "According to [Chapelle et al., 2002], σ of Gaussian kernel is set to 0.55 for both proposed algorithms as well as ClusK." "Besides, NysCK does not appear much faster than SoCK because we set the number of iterations of SoCK to a very small value, say, 20 to decrease the running time." (See the kernel sketch below.) |
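
The Dataset Splits row quotes the full split protocol but the paper released no code. Below is a minimal Python sketch of that protocol, assuming a generic feature matrix `X` and label vector `y`; the `fit_predict` callable, the parameter argument, and the random seed are hypothetical placeholders and not the paper's SoCK or NysCK implementations.

```python
# Sketch of the quoted split protocol: randomly pick 1K (or 3K) labeled training
# examples, treat the remaining ones as unlabeled, and select parameters by
# five-fold cross-validation on the labeled portion.
# `fit_predict` is a hypothetical stand-in for a learner; it is not SoCK or NysCK.
import numpy as np
from sklearn.model_selection import KFold

def split_labeled_unlabeled(X, y, n_labeled=1000, seed=0):
    """Randomly choose n_labeled examples as the labeled set; the rest are unlabeled."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(X))
    labeled, unlabeled = idx[:n_labeled], idx[n_labeled:]
    return X[labeled], y[labeled], X[unlabeled]

def five_fold_cv_score(X_lab, y_lab, fit_predict, param):
    """Mean accuracy of one candidate parameter value over five folds of the labeled data."""
    scores = []
    for tr, va in KFold(n_splits=5, shuffle=True, random_state=0).split(X_lab):
        y_pred = fit_predict(X_lab[tr], y_lab[tr], X_lab[va], param)
        scores.append(np.mean(y_pred == y_lab[va]))
    return float(np.mean(scores))
```

A caller would evaluate `five_fold_cv_score` for each candidate parameter value and keep the best one, mirroring the paper's statement that all parameters are selected via five-fold cross-validation.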
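
The Experiment Setup row pins down only the kernel width (σ = 0.55, following Chapelle et al., 2002). The sketch below shows one way that setting could be wired into a dense Gaussian kernel; the exp(-||x - y||² / (2σ²)) parameterization is an assumption, and neither the paper's storage-fit approximations (NysCK, SoCK) nor the cluster-kernel construction behind ClusK is reproduced here.

```python
# Dense Gaussian (RBF) kernel with the sigma = 0.55 setting quoted above.
# The 1/(2*sigma^2) scaling is assumed; the paper does not state its exact convention.
import numpy as np

def gaussian_kernel(X, Z, sigma=0.55):
    """K[i, j] = exp(-||x_i - z_j||^2 / (2 * sigma^2)) for rows x_i of X and z_j of Z."""
    sq_dists = (
        np.sum(X ** 2, axis=1)[:, None]
        + np.sum(Z ** 2, axis=1)[None, :]
        - 2.0 * X @ Z.T
    )
    return np.exp(-np.maximum(sq_dists, 0.0) / (2.0 * sigma ** 2))
```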