Learning and Data Selection in Big Datasets

Authors: Hossein Shokri Ghadikolaei, Hadi Ghauch, Carlo Fischione, Mikael Skoglund

ICML 2019

Reproducibility assessment (variable, result, and supporting evidence extracted by the LLM):
Research Type: Experimental. Evidence: "Numerical evaluations of real datasets reveal a large compressibility, up to 95%, without a noticeable drop in the learnability performance, measured by the generalization error."
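The 95% compressibility claim can be sanity-checked in spirit with a quick experiment. The sketch below is not the authors' pipeline: it uses synthetic data, plain random subsampling in place of the paper's data-selection procedure, and ridge regression as the learner, so it only illustrates how "generalization error at 5% of the data" would be measured.

```python
import numpy as np
from sklearn.datasets import make_regression
from sklearn.linear_model import Ridge
from sklearn.metrics import mean_squared_error
from sklearn.model_selection import train_test_split

# Synthetic stand-in for one of the paper's regression databases.
X, y = make_regression(n_samples=30_000, n_features=9, noise=10.0, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=1/3, random_state=0)

rng = np.random.default_rng(0)
for keep in (1.0, 0.25, 0.05):  # keep=0.05 corresponds to 95% compression
    idx = rng.choice(len(X_tr), size=int(keep * len(X_tr)), replace=False)
    model = Ridge(alpha=1.0).fit(X_tr[idx], y_tr[idx])
    err = mean_squared_error(y_te, model.predict(X_te))
    print(f"kept {keep:5.0%} of training data -> test MSE {err:.2f}")
```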
Researcher Affiliation: Academia. Evidence: "(1) School of Electrical Engineering and Computer Science, KTH Royal Institute of Technology, Stockholm, Sweden; (2) COMELEC Department, Telecom ParisTech, Paris, France."
Pseudocode: Yes. Evidence: "Algorithm 1: Alternating Data Selection and Function Approximation (DF)".
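The algorithm name suggests alternating minimization between a data-selection step and a function-approximation step. The sketch below is one plausible reading, not the authors' released code: the learner (ridge regression), the selection rule (keep the k samples with smallest residual under the current fit), and all parameter values are assumptions.

```python
import numpy as np
from sklearn.linear_model import Ridge

def alternating_df(X, y, keep_frac=0.05, n_iters=20, lam=1.0, seed=0):
    """Alternate between fitting a model and re-selecting training samples."""
    rng = np.random.default_rng(seed)
    k = max(1, int(keep_frac * len(X)))
    selected = rng.choice(len(X), size=k, replace=False)  # random initial subset
    for _ in range(n_iters):
        # Function-approximation step: refit on the currently selected samples.
        model = Ridge(alpha=lam).fit(X[selected], y[selected])
        # Data-selection step: keep the k samples best explained by the current
        # fit (an assumed criterion; it minimizes the joint loss over size-k subsets).
        residuals = (model.predict(X) - y) ** 2
        new_selected = np.argsort(residuals)[:k]
        if np.array_equal(np.sort(new_selected), np.sort(selected)):
            break  # selection has stabilized
        selected = new_selected
    return model, selected
```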
Open Source Code: No. The paper does not provide access to its source code; it only references third-party tools and public datasets.
Open Datasets: Yes. Evidence: Table 1, "Databases for regression task" (d is the input dimension):

    Database             # Training samples   # Test samples    d
    Bodyfat                         168               84       14
    Housing                         337              169       13
    Space-ga                      2,071            1,036        6
    Year Prediction MSD         463,715           51,630       90
    Power Consumption         1,556,445          518,814        9

Sources: StatLib Repository, http://lib.stat.cmu.edu/datasets/ (accessed 2019-01-22); UCI Machine Learning Repository, http://mlr.cs.umass.edu/ml (accessed 2019-01-22).
Dataset Splits: Yes. Evidence: Table 1 (reproduced above) gives explicit train/test sample counts for every database; the three smaller sets use a roughly 2:1 train/test ratio.
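The paper does not say how those splits were drawn, so the snippet below is only one way to reproduce splits of the same sizes; the random seed and the synthetic placeholder data are assumptions.

```python
import numpy as np
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(252, 14))  # placeholder with Bodyfat's total size and d=14
y = rng.normal(size=252)

# test_size=1/3 yields the 2:1 train/test ratio seen in Table 1.
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=1/3, random_state=0)
print(len(X_tr), len(X_te))  # 168 84
```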
Hardware Specification: No. The paper does not report the hardware (e.g., CPU/GPU models, memory) used to run its experiments.
Software Dependencies: Yes. Evidence: "Grant, M. and Boyd, S. CVX: MATLAB software for disciplined convex programming, version 2.1, March 2014. [Online] http://cvxr.com/cvx, accessed 2019-01-22."
Experiment Setup: No. The paper describes the model architecture and regularization (Tikhonov regularization with parameter λ) but does not provide specific hyperparameter values (e.g., learning rate, batch size, number of epochs) or optimizer settings.
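For concreteness, the Tikhonov-regularized least-squares subproblem referred to above has the form min_w ||Xw - y||^2 + λ||w||^2. A minimal sketch follows, using cvxpy as a Python stand-in for the cited CVX (MATLAB) toolbox; the λ value and the toy data are assumptions, since the paper does not report them.

```python
import cvxpy as cp
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 9))  # toy data; d=9 mirrors the Power Consumption set
y = X @ rng.normal(size=9) + 0.1 * rng.normal(size=100)

lam = 1.0  # regularization weight λ (assumed; not reported in the paper)
w = cp.Variable(9)
problem = cp.Problem(cp.Minimize(cp.sum_squares(X @ w - y) + lam * cp.sum_squares(w)))
problem.solve()
print("optimal w:", w.value)
```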