Chaining Mutual Information and Tightening Generalization Bounds
Authors: Amir Asadi, Emmanuel Abbe, Sergio Verdú
NeurIPS 2018
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Theoretical | In this paper, we introduce a technique to combine the chaining and mutual information methods, to obtain a generalization bound that is both algorithm-dependent and exploits the dependencies between the hypotheses. We provide an example in which our bound significantly outperforms both the chaining and the mutual information bounds. (See the sketch after this table.) |
| Researcher Affiliation | Academia | Princeton University; EPFL |
| Pseudocode | No | The paper does not contain any structured pseudocode or algorithm blocks. |
| Open Source Code | No | The paper does not provide concrete access to source code, such as a specific repository link or an explicit code release statement. |
| Open Datasets | No | The paper utilizes a theoretical 'canonical Gaussian process' in its examples, not a publicly available dataset that would typically be used for training machine learning models. |
| Dataset Splits | No | The paper is theoretical and does not conduct experiments on datasets that would require training, validation, or test splits. |
| Hardware Specification | No | The paper does not provide specific hardware details such as GPU/CPU models, processor types, or memory used for computations or experiments. |
| Software Dependencies | No | The paper does not provide specific ancillary software details, such as library or solver names with version numbers. |
| Experiment Setup | No | The paper is theoretical and does not describe an experimental setup with specific hyperparameters, training configurations, or system-level settings. |
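
As context for the "Theoretical" classification in the Research Type row, the sketch below outlines the kind of result the paper establishes: it combines the mutual-information generalization bound of Xu and Raginsky (2017) with the chaining method from empirical process theory. The notation here ($\mu$ the data distribution, $S$ the $n$-sample training set, $W$ the learned hypothesis, $[W]_k$ its quantization at scale $2^{-k}$, $\sigma$ the subgaussian parameter) is standard in that literature and is an assumption of this sketch rather than a quotation from the paper; the precise constants and regularity conditions are those stated in the paper itself.

```latex
\documentclass{article}
\usepackage{amsmath,amssymb}
\begin{document}

% Standard mutual-information bound (Xu & Raginsky, 2017): if the loss
% \ell(w, Z) is \sigma-subgaussian for Z ~ \mu and every hypothesis w,
% a learning algorithm P_{W|S} trained on n i.i.d. samples S satisfies
\[
  \bigl|\,\mathbb{E}\,\mathrm{gen}(\mu, P_{W|S})\,\bigr|
  \;\le\;
  \sqrt{\frac{2\sigma^{2}}{n}\, I(S; W)} .
\]

% Chaining quantizes the hypothesis at a hierarchy of scales: take
% increasing partitions P_k of the hypothesis space whose cells have
% diameter at most 2^{-k}, and let [W]_k denote the cell of P_k that
% contains W. The chained mutual-information bound of the paper is then,
% up to the absolute constants and conditions stated there, of the form
\[
  \mathbb{E}\,\mathrm{gen}(\mu, P_{W|S})
  \;\lesssim\;
  \sum_{k} 2^{-k} \sqrt{\, I\bigl(S; [W]_{k}\bigr)} ,
\]
% so each scale pays only the mutual information between the data and a
% coarse quantization of W, which can be much smaller than I(S; W).

\end{document}
```

The point of the combination is that each term charges only the mutual information between the data and a coarse quantization of the hypothesis, while the geometric weights $2^{-k}$ come from chaining; this is consistent with the example mentioned in the Research Type row, where the combined bound outperforms both the pure chaining and the pure mutual-information bounds.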