Task Relatedness-Based Generalization Bounds for Meta Learning

Authors: Jiechao Guan, Zhiwu Lu

ICLR 2022

Reproducibility Variable | Result | LLM Response
Research Type | Theoretical | A novel generalization bound of O(1/√(mn)) for meta learning is thus derived by exploiting the proposed task relatedness. Moreover, when investigating a special branch of meta learning that involves representation learning with deep neural networks, we establish spectrally-normalized bounds for both classification and regression problems. Finally, we demonstrate that the relatedness requirement between two tasks is satisfied when the sample space possesses the completeness and separability properties, validating the rationality and applicability of our proposed task-relatedness measure.
Researcher Affiliation | Academia | Jiechao Guan, School of Information, Renmin University of China, Beijing, China (2014200990@ruc.edu.cn); Zhiwu Lu, Gaoling School of Artificial Intelligence, Renmin University of China, Beijing, China, and Beijing Key Laboratory of Big Data Management and Analysis Methods, Beijing, China (luzhiwu@ruc.edu.cn)
Pseudocode | No | The paper focuses on theoretical derivations, proofs, and analyses, and does not include any pseudocode or algorithm blocks.
Open Source Code | No | The paper does not contain any statement about releasing source code, nor does it provide any links to a code repository.
Open Datasets | No | The paper is theoretical and does not refer to specific datasets used for training or to their public availability. It mentions "training tasks" and "training data" only in a conceptual, not experimental, context.
Dataset Splits | No | The paper is theoretical and does not provide dataset split information for validation or any other purpose.
Hardware Specification | No | The paper is theoretical and does not describe any hardware used for experiments.
Software Dependencies | No | The paper is theoretical and does not list any software dependencies or version numbers.
Experiment Setup | No | The paper is theoretical and does not describe an experimental setup, hyperparameters, or training configurations.
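For context on the rate quoted in the Research Type row: in meta-learning bounds, m and n conventionally denote the number of samples per task and the number of training tasks (or vice versa), and a bound of order O(1/√(mn)) means the transfer risk shrinks with the product of both. A hedged sketch of the typical shape of such a bound, using assumed standard notation rather than the paper's own definitions:

```latex
% Assumed notation (not taken from the paper under review):
%   n = number of training tasks, m = samples per task,
%   er(A) = expected transfer risk of meta-algorithm A,
%   \hat{er}(A) = its empirical counterpart on the n*m observed samples.
% Typical shape of a generalization bound with an O(1/sqrt(mn)) rate:
\mathrm{er}(\mathcal{A})
  \;\le\;
\widehat{\mathrm{er}}(\mathcal{A})
  \;+\;
\mathcal{O}\!\left(\frac{1}{\sqrt{mn}}\right)
```

This O(1/√(mn)) rate is faster than the O(1/√n + 1/√m) rates common in earlier meta-learning analyses, which is what makes the claimed bound notable.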