Smoothed Online Classification can be Harder than Batch Classification

Authors: Vinod Raman, Unique Subedi, Ambuj Tewari

NeurIPS 2024 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Theoretical | The abstract and introduction state that (i) the paper shows smoothed online classification can be harder than batch classification and (ii) it provides a sufficiency condition for smoothed online classification. The first claim is proven in Section 3 of the paper and the sufficiency condition in Section 4.
Researcher Affiliation | Academia | Vinod Raman, Department of Statistics, University of Michigan, Ann Arbor, MI 48104, vkraman@umich.edu; Unique Subedi, Department of Statistics, University of Michigan, Ann Arbor, MI 48104, subedi@umich.edu; Ambuj Tewari, Department of Statistics, University of Michigan, Ann Arbor, MI 48104, tewaria@umich.edu
Pseudocode | No | The paper provides theoretical proofs and discussions but does not include any explicitly labeled 'Pseudocode' or 'Algorithm' blocks, figures, or structured code-like representations of methods.
Open Source Code | No | The paper is theoretical and does not mention releasing any source code for its methodology. The NeurIPS checklist states 'The paper does not include experiments requiring code.'
Open Datasets | No | The paper is theoretical and does not conduct empirical experiments using specific datasets for training. It defines abstract spaces (X, Y) and measures (µ) for theoretical constructions, but not concrete public datasets. The NeurIPS checklist confirms 'The paper does not include experiments.'
Dataset Splits | No | The paper is theoretical and does not involve empirical experiments with training, validation, or test dataset splits. The NeurIPS checklist states 'The paper does not include experiments.'
Hardware Specification | No | The paper is theoretical and does not report on experiments that would require specific hardware. The NeurIPS checklist states 'The paper does not include experiments.'
Software Dependencies | No | The paper is theoretical and does not report on experiments that would require specific software dependencies with version numbers. The NeurIPS checklist states 'The paper does not include experiments.'
Experiment Setup | No | The paper is theoretical and does not describe any empirical experimental setup, including hyperparameters or system-level training settings. The NeurIPS checklist states 'The paper does not include experiments.'