Discrete Chebyshev Classifiers
Authors: Elad Eban, Elad Mezuman, Amir Globerson
ICML 2014
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Empirical results show that the method is competitive with other approaches that use the same input. |
| Researcher Affiliation | Academia | Elad Eban (elade@cs.huji.ac.il), Elad Mezuman (elad.mezuman@mail.huji.ac.il), Amir Globerson (gamir@cs.huji.ac.il); Edmond and Lily Safra Center for Brain Sciences and The Selim and Rachel Benin School of Computer Science and Engineering, The Hebrew University of Jerusalem. |
| Pseudocode | No | The paper does not contain any structured pseudocode or algorithm blocks. |
| Open Source Code | No | The paper does not provide any specific links to open-source code or explicitly state that the code will be made publicly available. |
| Open Datasets | Yes | We tested the DCC classifier scheme on 12 classification datasets from the UCI repository |
| Dataset Splits | Yes | Each synthetic trial contained 5,000 examples divided equally between train and test sets, with results averaged over 10 random generations of the data. The error rates reported in Table 1 are the average over 5 partitions into train and test sets (a sketch of this split-and-average protocol appears after the table). |
| Hardware Specification | No | The paper does not provide any specific details about the hardware (e.g., GPU/CPU models, memory) used for running the experiments. |
| Software Dependencies | No | The paper does not provide specific version numbers for any software dependencies or libraries used in the experiments. |
| Experiment Setup | No | The paper describes general experimental settings like dataset splits and comparisons, but does not provide specific hyperparameter values (e.g., learning rate, batch size, epochs, optimizer settings) or detailed system-level training configurations for the DCC method itself. |
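
The Dataset Splits row describes a simple split-and-average evaluation protocol: partition each dataset equally into train and test sets, repeat over several random partitions, and report the mean test error. The sketch below illustrates that protocol only; since no DCC implementation is publicly released, a scikit-learn logistic regression is used as a stand-in classifier, and the dataset loader is likewise illustrative rather than one of the paper's 12 UCI datasets.

```python
# Minimal sketch of the split-and-average protocol quoted in the table:
# 50/50 train/test partitions, repeated, with the mean test error reported.
# The classifier and dataset here are stand-ins, not the paper's DCC method.
import numpy as np
from sklearn.datasets import load_breast_cancer  # illustrative dataset
from sklearn.linear_model import LogisticRegression  # stand-in for DCC
from sklearn.model_selection import train_test_split


def average_error(X, y, n_partitions=5, seed=0):
    """Average test error over random 50/50 train/test partitions."""
    errors = []
    for i in range(n_partitions):
        X_tr, X_te, y_tr, y_te = train_test_split(
            X, y, test_size=0.5, random_state=seed + i)
        clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
        errors.append(1.0 - clf.score(X_te, y_te))  # error = 1 - accuracy
    return float(np.mean(errors))


if __name__ == "__main__":
    X, y = load_breast_cancer(return_X_y=True)
    print(f"Mean test error over 5 partitions: {average_error(X, y):.3f}")
```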