Generalization Analysis for Label-Specific Representation Learning

Authors: Yi-Fan Zhang, Min-Ling Zhang

NeurIPS 2024

Reproducibility Variable | Result | LLM Response
Research Type | Theoretical | "However, the generalization analysis of LSRL is still in its infancy. The existing theory bounds for multi-label learning, which preserve the coupling among different components, are invalid for LSRL. In an attempt to overcome this challenge and make up for the gap in the generalization theory of LSRL, we develop a novel vector-contraction inequality and derive the generalization bound for general function class of LSRL with a weaker dependency on the number of labels than the state of the art." (A standard vector-contraction inequality is recalled after this table for context.)
Researcher Affiliation | Academia | 1) School of Cyber Science and Engineering, Southeast University, Nanjing 210096, China; 2) School of Computer Science and Engineering, Southeast University, Nanjing 210096, China; 3) Key Laboratory of Computer Network and Information Integration (Southeast University), Ministry of Education, China
Pseudocode | No | The paper does not contain structured pseudocode or algorithm blocks.
Open Source Code | No | This is a purely theoretical work. This paper does not include experiments requiring code.
Open Datasets | No | This is a purely theoretical work. This paper does not include experiments.
Dataset Splits | No | This is a purely theoretical work. This paper does not include experiments.
Hardware Specification | No | This is a purely theoretical work. This paper does not include experiments.
Software Dependencies | No | This is a purely theoretical work. This paper does not include experiments.
Experiment Setup | No | This is a purely theoretical work. This paper does not include experiments.
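For context on the quoted contribution (this is background, not the paper's new inequality): generalization bounds of this kind typically rest on a vector-contraction inequality for Rademacher complexities. A standard form, due to Maurer (2016), is sketched below, assuming each $h_i:\mathbb{R}^K\to\mathbb{R}$ is $L$-Lipschitz with respect to the Euclidean norm, $f=(f_1,\dots,f_K)$ ranges over a vector-valued function class $\mathcal{F}$, and $\epsilon_i$, $\epsilon_{ik}$ are i.i.d. Rademacher variables:

$$
\mathbb{E}_{\epsilon}\,\sup_{f\in\mathcal{F}}\sum_{i=1}^{n}\epsilon_i\,h_i\bigl(f(x_i)\bigr)
\;\le\;
\sqrt{2}\,L\;\mathbb{E}_{\epsilon}\,\sup_{f\in\mathcal{F}}\sum_{i=1}^{n}\sum_{k=1}^{K}\epsilon_{ik}\,f_k(x_i).
$$

Bounds built directly on this classical inequality generally inherit an explicit dependence on the number of labels $K$ through the double sum on the right-hand side; the paper's stated contribution is a novel vector-contraction inequality whose resulting bound for LSRL has a weaker dependency on the number of labels.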