Notice: The reproducibility variables underlying each score are classified using an automated LLM-based pipeline, validated against a manually labeled dataset. LLM-based classification introduces uncertainty and potential bias; scores should be interpreted as estimates. Full accuracy metrics and methodology are described in [1].
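As a hypothetical illustration of that validation step (the record fields, labels, and values below are invented for the sketch and are not the pipeline's actual data or metrics, which are reported in [1]), per-variable agreement between LLM and manual labels can be computed like this:

```python
from collections import defaultdict

# Hypothetical (variable, llm_label, manual_label) records; the real
# validation data and accuracy figures are described in [1].
records = [
    ("Research Type", "Theoretical", "Theoretical"),
    ("Open Source Code", "No", "No"),
    ("Open Datasets", "No", "Yes"),  # an example disagreement
]

hits, totals = defaultdict(int), defaultdict(int)
for variable, llm_label, manual_label in records:
    totals[variable] += 1
    hits[variable] += int(llm_label == manual_label)

# Per-variable accuracy of the LLM pipeline against the manual labels.
for variable, n in totals.items():
    print(f"{variable}: {hits[variable] / n:.2f}")
```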

Universal Approximation Property of Invertible Neural Networks

Authors: Isao Ishikawa, Takeshi Teshima, Koichi Tojo, Kenta Oono, Masahiro Ikeda, Masashi Sugiyama

JMLR 2023 | Venue PDF | LLM Run Details

Reproducibility Variables

Research Type: Theoretical
LLM response: To answer this question, we have developed a general theoretical framework to investigate the representation power of INNs, building on a structure theorem of differential geometry. The framework simplifies the approximation problem of diffeomorphisms, which enables us to show the universal approximation properties of INNs. (A minimal sketch of the coupling-layer architectures this result concerns follows the table.)

Researcher Affiliation: Academia
LLM response: Isao Ishikawa (Center for Data Science, Ehime University, Ehime, Japan; Center for Advanced Intelligence Project, RIKEN, Tokyo, Japan); Takeshi Teshima (The University of Tokyo, Tokyo, Japan; Center for Advanced Intelligence Project, RIKEN, Tokyo, Japan); Koichi Tojo (Center for Advanced Intelligence Project, RIKEN, Tokyo, Japan); Kenta Oono (Center for Advanced Intelligence Project, RIKEN, Tokyo, Japan); Masahiro Ikeda (Center for Advanced Intelligence Project, RIKEN, Tokyo, Japan); Masashi Sugiyama (Center for Advanced Intelligence Project, RIKEN, Tokyo, Japan; The University of Tokyo, Tokyo, Japan)
Pseudocode: No
LLM response: The paper is a theoretical work focusing on mathematical proofs and definitions; it does not include any structured pseudocode or algorithm blocks.

Open Source Code: No
LLM response: The paper does not contain any explicit statements regarding the release of source code or links to a code repository.

Open Datasets: No
LLM response: This theoretical paper focuses on the universal approximation properties of invertible neural networks and does not present any empirical studies or experiments involving datasets.

Dataset Splits: No
LLM response: As this is a theoretical paper without empirical experiments, there is no mention of dataset splits for training, validation, or testing.
Hardware Specification: No
LLM response: This theoretical paper does not describe any experiments or computational implementations and therefore does not specify any hardware.
Software Dependencies: No
LLM response: This theoretical paper does not describe any software implementation or dependencies with specific version numbers.

Experiment Setup: No
LLM response: As this is a theoretical paper, it does not include details on experimental setup, hyperparameters, or training configurations.
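For context on the architecture class behind these classifications: coupling-flow INNs are built from layers that are invertible by construction, and the paper's framework reduces the approximation of a general diffeomorphism to that of much simpler maps that such layers can realize. Below is a minimal NumPy sketch of one affine coupling layer in the RealNVP style; the layer widths, random weights, and class name are illustrative assumptions for this page, not the paper's construction. The point it demonstrates is that the inverse exists in closed form no matter what the internal scale and shift networks compute.

```python
import numpy as np

rng = np.random.default_rng(0)


class AffineCoupling:
    """One affine coupling layer: (x1, x2) -> (x1, x2 * exp(s(x1)) + t(x1)).

    Because x1 passes through unchanged, the layer is invertible in
    closed form regardless of what the scale net s and shift net t compute.
    """

    def __init__(self, dim, hidden=16):
        self.d = dim // 2
        # Tiny one-hidden-layer MLPs for s and t (illustrative, untrained).
        self.Ws1 = rng.normal(size=(self.d, hidden)) / np.sqrt(self.d)
        self.Ws2 = rng.normal(size=(hidden, dim - self.d)) / np.sqrt(hidden)
        self.Wt1 = rng.normal(size=(self.d, hidden)) / np.sqrt(self.d)
        self.Wt2 = rng.normal(size=(hidden, dim - self.d)) / np.sqrt(hidden)

    def _s(self, x1):
        return np.tanh(x1 @ self.Ws1) @ self.Ws2

    def _t(self, x1):
        return np.tanh(x1 @ self.Wt1) @ self.Wt2

    def forward(self, x):
        x1, x2 = x[..., :self.d], x[..., self.d:]
        y2 = x2 * np.exp(self._s(x1)) + self._t(x1)  # elementwise affine map
        return np.concatenate([x1, y2], axis=-1)

    def inverse(self, y):
        y1, y2 = y[..., :self.d], y[..., self.d:]
        x2 = (y2 - self._t(y1)) * np.exp(-self._s(y1))  # exact analytic inverse
        return np.concatenate([y1, x2], axis=-1)


# Invertibility check: inverse(forward(x)) recovers x up to float error.
layer = AffineCoupling(dim=4)
x = rng.normal(size=(3, 4))
assert np.allclose(layer.inverse(layer.forward(x)), x)
```

Stacking many such layers, typically interleaved with coordinate permutations or invertible linear maps, yields the coupling-flow INNs whose universal approximation properties the paper establishes; the precise norms and function classes are given in the paper itself.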