On Stronger Computational Separations Between Multimodal and Unimodal Machine Learning

Authors: Ari Karchmer

ICML 2024

Reproducibility Variable | Result | LLM Response
Research Type | Theoretical | "In this paper, we give a stronger average-case computational separation, where for typical instances of the learning task, unimodal learning is computationally hard, but multimodal learning is easy." Under the low-noise LPN assumption, there exists an average-case bimodal learning task that can be completed in polynomial time, and a corresponding average-case unimodal learning task that cannot (see the illustrative LPN sketch below the table).
Researcher Affiliation | Academia | Department of Computer Science, Boston University, Boston, MA, USA. Correspondence to: Ari Karchmer <arika@bu.edu>.
Pseudocode | Yes | The paper includes pseudocode: Algorithm 1 (A_µ) and Algorithm 2 (Protocol 1).
Open Source Code | No | The paper does not contain an explicit statement about the release of source code for the methodology described, nor does it provide any links to a code repository.
Open Datasets | No | The paper focuses on theoretical constructions of learning tasks and computational separations, not empirical studies using publicly available datasets for training. No specific dataset access information is provided.
Dataset Splits | No | The paper is theoretical and does not involve empirical experiments with dataset splits for training, validation, or testing.
Hardware Specification | No | The paper is theoretical and does not describe experimental procedures that would require specific hardware. No hardware specifications are provided.
Software Dependencies | No | The paper is theoretical and does not mention any specific software dependencies or their version numbers required for replication.
Experiment Setup | No | The paper is theoretical and does not describe an empirical experimental setup with details such as hyperparameters or training configurations.
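
For context on the low-noise LPN assumption cited in the Research Type row, here is a minimal, purely illustrative sketch of a low-noise LPN (Learning Parity with Noise) instance, together with a toy second modality that collapses the hardness. Everything in it is an assumption made for illustration: the dimensions, the noise rate tau = log^2(n)/n, and especially the choice of "second modality" (revealing the noise bits e), which is a hypothetical stand-in and not the paper's actual bimodal construction.

```python
# Purely illustrative sketch (not from the paper): a toy low-noise LPN
# instance, plus a hypothetical "second modality" that reveals the noise
# bits. Revealing the noise is a stand-in chosen for simplicity; the
# paper's actual bimodal learning task is constructed differently.
import numpy as np


def sample_lpn(n, m, tau, rng):
    """Draw m LPN samples over F_2: rows a_i uniform in {0,1}^n and labels
    b_i = <a_i, s> XOR e_i, where each noise bit e_i is Bernoulli(tau)."""
    s = rng.integers(0, 2, size=n)          # secret s in F_2^n
    A = rng.integers(0, 2, size=(m, n))     # public coefficient vectors
    e = (rng.random(m) < tau).astype(int)   # sparse noise ("low noise": tau = o(1))
    b = (A @ s + e) % 2                     # noisy parity labels
    return A, b, s, e


def solve_gf2(A, b):
    """Recover x with A x = b over F_2 by Gaussian elimination (assumes the
    system is consistent and A has full column rank, which holds with
    overwhelming probability for random A with m >> n)."""
    A, b = A.copy() % 2, b.copy() % 2
    m, n = A.shape
    row, pivots = 0, []
    for col in range(n):
        pivot = next((r for r in range(row, m) if A[r, col]), None)
        if pivot is None:
            continue                         # free column; skip it
        A[[row, pivot]], b[[row, pivot]] = A[[pivot, row]], b[[pivot, row]]
        for r in range(m):                   # eliminate col everywhere else
            if r != row and A[r, col]:
                A[r] ^= A[row]
                b[r] ^= b[row]
        pivots.append(col)
        row += 1
    x = np.zeros(n, dtype=int)
    for r, col in enumerate(pivots):         # read solution off the RREF
        x[col] = b[r]
    return x


rng = np.random.default_rng(0)
n, m = 64, 256
tau = np.log(n) ** 2 / n                     # one common "low-noise" rate
A, b, s, e = sample_lpn(n, m, tau, rng)

# Unimodal view: (A, b) alone -- conjectured hard to learn in poly time.
# Multimodal view: (A, b) together with the hypothetical second modality e.
# XORing out the noise leaves an exact linear system, so the secret falls
# out of Gaussian elimination over F_2.
clean = b ^ e                                # noiseless labels <a_i, s>
recovered = solve_gf2(A, clean)
print("secret recovered from the joint view:", np.array_equal(recovered, s))
```

The toy example only shows the shape of such a separation: the unimodal view (A, b) is conjectured hard to learn in polynomial time at low noise rates, while the joint view reduces recovering the secret to linear algebra over F_2.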