Notice: The reproducibility variables underlying each score are classified using an automated LLM-based pipeline, validated against a manually labeled dataset. LLM-based classification introduces uncertainty and potential bias; scores should be interpreted as estimates. Full accuracy metrics and methodology are described in [1].
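The validation step mentioned above amounts to comparing LLM-assigned labels against a manually labeled reference set. A minimal sketch of such a check is below; the variable names, label values, and the `per_variable_accuracy` helper are illustrative assumptions, not the actual pipeline from [1].

```python
# Hypothetical sketch: measuring agreement between LLM-assigned
# reproducibility labels and manual gold labels, per variable.
# All names and label values here are illustrative.
from collections import defaultdict

def per_variable_accuracy(llm_labels, manual_labels):
    """Accuracy of LLM labels for each reproducibility variable.

    Both arguments map (paper_id, variable) -> label string.
    """
    correct = defaultdict(int)
    total = defaultdict(int)
    for key, gold in manual_labels.items():
        _, variable = key
        total[variable] += 1
        if llm_labels.get(key) == gold:
            correct[variable] += 1
    return {v: correct[v] / total[v] for v in total}

manual = {("paper1", "Open Source Code"): "Yes",
          ("paper1", "Pseudocode"): "No",
          ("paper2", "Open Source Code"): "No"}
llm = {("paper1", "Open Source Code"): "Yes",
       ("paper1", "Pseudocode"): "Yes",
       ("paper2", "Open Source Code"): "No"}

print(per_variable_accuracy(llm, manual))
# {'Open Source Code': 1.0, 'Pseudocode': 0.0}
```

Per-variable accuracy matters here because classification difficulty differs across variables (e.g., spotting a code link is easier than judging dataset splits), so a single aggregate accuracy would hide the weakest categories.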

On Contrastive Representations of Stochastic Processes

Authors: Emile Mathieu, Adam Foster, Yee Whye Teh

NeurIPS 2021

| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Empirically, we show that our methods are effective for learning representations of periodic functions, 3D objects and dynamical processes. |
| Researcher Affiliation | Collaboration | Emile Mathieu, Adam Foster, Yee Whye Teh; Department of Statistics, University of Oxford, United Kingdom; DeepMind, United Kingdom |
| Pseudocode | No | The paper does not contain any pseudocode or algorithm blocks. |
| Open Source Code | Yes | Our code is publicly available at github.com/ae-foster/cresp. |
| Open Datasets | Yes | We apply CRESP to ShapeNet (Chang et al., 2015), a standard dataset in the field of 3D object representations. |
| Dataset Splits | No | The paper mentions 'training views' and 'test views' but does not specify clear train/validation/test dataset splits with percentages, absolute counts, or references to predefined validation sets. |
| Hardware Specification | No | The paper does not specify the exact hardware used for experiments (e.g., specific GPU or CPU models, memory, or cluster specifications). |
| Software Dependencies | No | The paper does not provide specific software dependencies with version numbers. |
| Experiment Setup | Yes | Please refer to Appendix D for full experimental details. [...] We train all models for 200 epochs, varying the distance between modes and the number of training context points. [...] They are trained for 200 epochs, with contexts of 5 randomly sampled pairs {y_i = F(x_i), x_i ~ U([0, 1])}. |
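The context-sampling step quoted in the Experiment Setup row (5 pairs with x_i ~ U([0, 1]) and y_i = F(x_i)) can be sketched as follows. The function `F` used here (a sine wave) merely stands in for a sampled periodic function, and `sample_context` is a hypothetical helper, not code from the paper's repository.

```python
# Illustrative sketch of the quoted context-sampling step:
# draw n_context input locations x_i ~ U([0, 1]) and evaluate
# y_i = F(x_i). F below is a stand-in periodic function.
import math
import random

def sample_context(F, n_context=5, rng=random):
    xs = [rng.random() for _ in range(n_context)]  # x_i ~ U([0, 1])
    return [(x, F(x)) for x in xs]                 # context pairs (x_i, y_i)

F = lambda x: math.sin(2 * math.pi * x)  # example periodic function
context = sample_context(F)
print(len(context))  # 5
```

Resampling a fresh context each epoch is what forces the representation to capture the underlying process rather than any fixed set of observations.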