Pareto Optimal Streaming Unsupervised Classification
Authors: Soumya Basu, Steven Gutstein, Brent Lance, Sanjay Shakkottai
ICML 2019
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Finally, we validate our theoretical results using ensembles comprised of Alexnet (Krizhevsky et al., 2012), VGG (Simonyan & Zisserman, 2014) and Resnet (He et al., 2016) deep convolutional neural nets. We perform experiments on modified Cifar-10 datasets. |
| Researcher Affiliation | Collaboration | Soumya Basu (1), Steven Gutstein (2), Brent Lance (2), Sanjay Shakkottai (1). (1) The University of Texas at Austin, USA; (2) Army Research Lab, USA. |
| Pseudocode | Yes | Algorithm 1: Max-weight with Bayesian Departure; Algorithm 2: Online Spectral Learner |
| Open Source Code | No | The paper does not provide any explicit statements about releasing source code or links to a code repository. |
| Open Datasets | Yes | We perform experiments on modified Cifar-10 datasets. |
| Dataset Splits | No | The paper mentions using 'modified Cifar-10 datasets' but does not specify training, validation, or test splits (e.g., percentages, sample counts, or predefined split references). |
| Hardware Specification | No | The paper does not provide specific hardware details (e.g., GPU/CPU models, memory specifications) used for running the experiments. |
| Software Dependencies | No | The paper mentions using models like Alexnet, VGG, and Resnet, but does not list specific software dependencies with version numbers (e.g., Python, PyTorch, TensorFlow versions). |
| Experiment Setup | No | The paper describes the system model and algorithms but does not provide specific experimental setup details such as hyperparameter values (e.g., learning rate, batch size, number of epochs) or training configurations. |