Notice: The reproducibility variables underlying each score are classified using an automated LLM-based pipeline, validated against a manually labeled dataset. LLM-based classification introduces uncertainty and potential bias; scores should be interpreted as estimates. Full accuracy metrics and methodology are described in [1].

Universal Approximation of Functions on Sets

Authors: Edward Wagstaff, Fabian B. Fuchs, Martin Engelcke, Michael A. Osborne, Ingmar Posner

JMLR 2022 | Venue PDF | LLM Run Details

Reproducibility Variable | Result | LLM Response

Research Type | Theoretical | We provide a theoretical analysis of Deep Sets which shows that this universal approximation property is only guaranteed if the model's latent space is sufficiently high-dimensional. If the latent space is even one dimension lower than necessary, there exist piecewise-affine functions for which Deep Sets performs no better than a naïve constant baseline, as judged by worst-case error. ... In this work, we contribute to this theoretical understanding by considering how the dimension of a Deep Sets model's latent space affects its expressive capacity. ... This work provides a theoretical characterisation of the representation and approximation of permutation-invariant functions.

Researcher Affiliation | Academia | Edward Wagstaff EMAIL, Fabian B. Fuchs EMAIL, Martin Engelcke EMAIL, Michael A. Osborne EMAIL, Ingmar Posner EMAIL. Department of Engineering Science, University of Oxford, Oxford, UK.

Pseudocode | No | The paper primarily presents theoretical analysis, proofs, and mathematical discussion of function representation and approximation. It does not contain any structured pseudocode or algorithm blocks.

Open Source Code | No | The paper does not explicitly state that source code for the described methodology is available, nor does it provide a link to a code repository. The paper focuses on theoretical analysis rather than empirical implementation.

Open Datasets | No | The paper is theoretical in nature, focusing on mathematical properties of functions on sets. It does not use or make available any real-world or synthetic datasets for empirical evaluation. The discussion involves abstract mathematical spaces such as R^M or [0, 1]^M.

Dataset Splits | No | The paper is a theoretical work and does not perform experiments on specific datasets, so it provides no information about training, validation, or test splits.

Hardware Specification | No | The paper presents a theoretical analysis and does not involve empirical experiments requiring computational hardware, so no hardware specifications are mentioned.

Software Dependencies | No | The paper is a theoretical study focused on mathematical proofs and analysis. It does not describe implemented software or require versioned software dependencies for experimental reproduction.

Experiment Setup | No | The paper is purely theoretical, providing a mathematical characterisation of functions on sets. It does not describe empirical experiments, hyperparameters, training configurations, or system-level settings.
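The Research Type response above concerns sum-decomposable Deep Sets models of the form f(X) = ρ(Σ_{x∈X} φ(x)), whose expressive capacity depends on the latent-space dimension. A minimal sketch of this decomposition and its permutation invariance, using illustrative placeholder maps for φ and ρ (not the paper's constructions), might look like:

```python
import numpy as np

# Minimal sketch of a Deep Sets sum-decomposition f(X) = rho(sum over x of phi(x)).
# phi and rho here are arbitrary illustrative maps, not the paper's constructions.
rng = np.random.default_rng(0)
LATENT_DIM = 4  # the latent-space dimension that the paper's analysis concerns

W_phi = rng.normal(size=(1, LATENT_DIM))   # per-element encoder phi: R -> R^LATENT_DIM
W_rho = rng.normal(size=(LATENT_DIM, 1))   # decoder rho: R^LATENT_DIM -> R

def deep_sets(x):
    """Apply f(X) = rho(sum of phi over the set elements in x)."""
    latent = np.tanh(x[:, None] @ W_phi)   # phi applied elementwise: shape (M, LATENT_DIM)
    pooled = latent.sum(axis=0)            # sum pooling: order of elements cannot matter
    return (pooled @ W_rho).item()         # rho as a linear read-out

x = np.array([0.3, -1.2, 0.7])
# Sum pooling makes the output invariant to any reordering of the input set.
assert np.isclose(deep_sets(x), deep_sets(x[::-1]))
```

The sketch only illustrates the invariance property; the paper's result is that for such models to approximate all continuous permutation-invariant functions on sets of size M, LATENT_DIM must be sufficiently large relative to M.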