Notice: The reproducibility variables underlying each score are classified using an automated LLM-based pipeline, validated against a manually labeled dataset. LLM-based classification introduces uncertainty and potential bias; scores should be interpreted as estimates. Full accuracy metrics and methodology are described in [1].

Probabilistic Symmetries and Invariant Neural Networks

Authors: Benjamin Bloem-Reddy, Yee Whye Teh

JMLR 2020 | Venue PDF | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Theoretical | Treating neural network inputs and outputs as random variables, we characterize the structure of neural networks that can be used to model data that are invariant or equivariant under the action of a compact group. Much recent research has been devoted to encoding invariance under symmetry transformations into neural network architectures, in an effort to improve the performance of deep neural networks in data-scarce, non-i.i.d., or unsupervised settings. By considering group invariance from the perspective of probabilistic symmetry, we establish a link between functional and probabilistic symmetry, and obtain generative functional representations of probability distributions that are invariant or equivariant under the action of a compact group. Our representations completely characterize the structure of neural networks that can be used to model such distributions and yield a general program for constructing invariant stochastic or deterministic neural networks.
Researcher Affiliation | Academia | Benjamin Bloem-Reddy (EMAIL), Department of Statistics, University of British Columbia, Vancouver V6T 1Z4, Canada; Yee Whye Teh (EMAIL), Department of Statistics, University of Oxford, Oxford OX1 3LB, United Kingdom
Pseudocode | No | The paper includes Section 5.1, "A Program for Obtaining Symmetric Functional Representations," which outlines a sequence of steps, but it is a high-level conceptual description rather than structured pseudocode or an algorithm block.
Open Source Code | No | The paper mentions "open-source software like Theano (Theano Development Team, 2016), TensorFlow (Abadi et al., 2015), and PyTorch (Paszke et al., 2019)" as tools in the field, but does not provide any link or explicit statement about releasing code for the methodology described in this paper.
Open Datasets | No | The paper is theoretical and develops a framework for designing invariant neural networks. It discusses examples from the literature (e.g., Deep Sets, Set Transformer) that may rely on specific datasets, but the paper itself conducts no experiments and provides no access information for any dataset of its own.
Dataset Splits | No | The paper is theoretical and conducts no experiments, so it reports no training/validation/test splits.
Hardware Specification | No | The paper is theoretical and conducts no experiments, so it specifies no hardware.
Software Dependencies | No | The paper is theoretical and conducts no experiments, so it lists no version-pinned software dependencies needed for replication. Theano, TensorFlow, and PyTorch are mentioned in the introduction as context for the field, not as dependencies of this work.
Experiment Setup | No | The paper is theoretical and conducts no experiments, so it describes no experimental setup details such as hyperparameters or system-level training settings.
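As context for the "Research Type" entry above: the paper's characterization generalizes sum-decomposable, permutation-invariant architectures of the Deep Sets form f(X) = ρ(Σᵢ φ(xᵢ)). The sketch below is a minimal, hypothetical NumPy illustration of that form (random weights, names `phi`, `rho_readout`, `f` are ours, not the paper's), showing that symmetric pooling makes the output invariant to reordering the input set.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical weights for a minimal sum-decomposable network
# f(X) = rho(sum_i phi(x_i)); purely illustrative, not the paper's code.
W_phi = rng.normal(size=(4, 8))  # per-element feature map phi: R^4 -> R^8
W_rho = rng.normal(size=(8, 1))  # readout rho applied to the pooled sum

def phi(x):
    # Elementwise embedding applied to each set member independently.
    return np.tanh(x @ W_phi)

def f(X):
    # Sum pooling is symmetric in the rows of X, so f is
    # invariant under any permutation of the set elements.
    pooled = phi(X).sum(axis=0)
    return np.tanh(pooled @ W_rho)

X = rng.normal(size=(5, 4))      # a "set" of 5 elements in R^4
perm = rng.permutation(5)
assert np.allclose(f(X), f(X[perm]))  # same output after reordering
```

Because the pooled sum is unchanged by row permutations, the invariance holds for any weights; the paper's contribution is proving that, under the stated probabilistic symmetries, all invariant models admit representations of essentially this shape.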