Notice: The reproducibility variables underlying each score are classified using an automated LLM-based pipeline, validated against a manually labeled dataset. LLM-based classification introduces uncertainty and potential bias; scores should be interpreted as estimates. Full accuracy metrics and methodology are described in [1].
Latent Bottlenecked Attentive Neural Processes
Authors: Leo Feng, Hossein Hajimirsadeghi, Yoshua Bengio, Mohamed Osama Ahmed
ICLR 2023 | Venue PDF | LLM Run Details
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We evaluate Latent Bottlenecked Attentive Neural Processes (LBANPs) on several tasks: meta regression, image completion, and contextual bandits. These experiment settings have been used extensively to benchmark NP models in prior works (Garnelo et al., 2018a; Kim et al., 2019; Lee et al., 2020; Nguyen & Grover, 2022). We compare LBANPs with the following members of the NP family: Conditional Neural Processes (CNPs) (Garnelo et al., 2018a), Neural Processes (NPs) (Garnelo et al., 2018b), Bootstrapping Neural Processes (BNPs) (Lee et al., 2020), and Transformer Neural Processes (TNPs) (Nguyen & Grover, 2022). In addition, we compare with their attentive variants (Kim et al., 2019) (CANPs, ANPs, and BANPs). |
| Researcher Affiliation | Collaboration | Leo Feng Mila Université de Montréal & Borealis AI EMAIL Hossein Hajimirsadeghi Borealis AI EMAIL Yoshua Bengio Mila Université de Montréal EMAIL Mohamed Osama Ahmed Borealis AI EMAIL |
| Pseudocode | No | No structured pseudocode or algorithm blocks were found. The methodology is described using prose and mathematical equations. |
| Open Source Code | Yes | The code is available at https://github.com/BorealisAI/latent-bottlenecked-anp. |
| Open Datasets | Yes | For these experiments, we consider two datasets: EMNIST (Cohen et al., 2017) and CelebA (Liu et al., 2015). |
| Dataset Splits | No | The paper describes sampling context and target datapoints for meta-learning tasks (e.g., 'N ~ U[3, 197) context datapoints are sampled, and M ~ U[3, 200 - N) target datapoints are sampled.'), but it does not specify fixed training/validation/test dataset splits for the overall datasets (EMNIST, CelebA) needed for general reproducibility of splits. |
| Hardware Specification | Yes | All experiments were either run on a GTX 1080ti (12 GB RAM) or P100 GPU (16 GB RAM). When verifying if computationally expensive models were trainable for CelebA64 and CelebA128, we used the P100 GPU (the GPU with larger amounts of RAM). |
| Software Dependencies | No | The paper mentions using 'the implementation of the baselines from the official repository of TNPs' but does not specify particular software dependencies with version numbers (e.g., Python, PyTorch, CUDA versions). |
| Experiment Setup | Yes | For simplicity, we set DC = DL = DQ = 64, following TNP's embedding size of 64. We do not tune the number of latent vectors (L). Instead, we showed results for LBANP with L = 8 and L = 128 latent vectors. The remainder of the hyperparameters is the same for LBANP, EQTNP, and TNP. |
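As a reading aid for the context/target sampling quoted in the Dataset Splits row, the following is a minimal sketch of that scheme: for a task of 200 points, N ~ U[3, 197) context sizes and M ~ U[3, 200 - N) target sizes are drawn. The function name, defaults, and use of Python's `random` module are illustrative assumptions, not the authors' code.

```python
import random

def sample_split_sizes(num_points=200, min_ctx=3, max_ctx=197):
    """Sketch of the paper's per-task sampling (names/defaults assumed):
    N ~ U[3, 197) context points, then M ~ U[3, 200 - N) target points."""
    n_context = random.randrange(min_ctx, max_ctx)          # N in [3, 197)
    n_target = random.randrange(3, num_points - n_context)  # M in [3, 200 - N)
    return n_context, n_target
```

Because splits are resampled per task rather than fixed, runs differ unless the random seed is also fixed, which is why the pipeline marks Dataset Splits as "No".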