Transductive Learning is Compact

Authors: Julian Asilis, Siddartha Devic, Shaddin Dughmi, Vatsal Sharan, Shang-Hua Teng

NeurIPS 2024

Reproducibility Variable | Result | LLM Response
Research Type | Theoretical | All our results are theoretical, and stated with their full set of required assumptions.
Researcher Affiliation | Academia | Julian Asilis (USC, asilis@usc.edu); Siddartha Devic (USC, devic@usc.edu); Shaddin Dughmi (USC, shaddin@usc.edu); Vatsal Sharan (USC, vsharan@usc.edu); Shang-Hua Teng (USC, shanghua@usc.edu)
Pseudocode | No | The paper contains theoretical proofs and theorems, but no pseudocode or algorithm blocks are explicitly presented.
Open Source Code | No | The paper does not include any experiments requiring code. (NeurIPS Paper Checklist)
Open Datasets | No | The paper does not include any experiments. (NeurIPS Paper Checklist)
Dataset Splits | No | The paper does not include any experiments. (NeurIPS Paper Checklist)
Hardware Specification | No | The paper does not include any experiments. (NeurIPS Paper Checklist)
Software Dependencies | No | The paper does not include any experiments. (NeurIPS Paper Checklist)
Experiment Setup | No | The paper does not include any experiments. (NeurIPS Paper Checklist)