Semi-Parametric Inducing Point Networks and Neural Processes

Authors: Richa Rastogi, Yair Schiff, Alon Hacohen, Zhaozhi Li, Ian Lee, Yuntian Deng, Mert R. Sabuncu, Volodymyr Kuleshov

ICLR 2023 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | In our experiments, SPIN reduces memory requirements, improves accuracy across a range of meta-learning tasks, and improves state-of-the-art performance on an important practical problem, genotype imputation.
Researcher Affiliation | Academia | Richa Rastogi, Yair Schiff, Zhaozhi Li, Ian Lee, Mert R. Sabuncu, & Volodymyr Kuleshov, Cornell University, {rr568,yzs2,zl643,yl759,msabuncu,kuleshov}@cornell.edu; Alon Hacohen, Technion Israel Institute of Technology, alonhacohen@campus.technion.ac.il; Yuntian Deng, Harvard University, dengyuntian@seas.harvard.edu
Pseudocode | No | The paper includes architectural diagrams and mathematical formulations but no structured pseudocode or algorithm blocks.
Open Source Code | Yes | Code and data used to reproduce experimental results are provided in Appendix C. UCI and Genomic Task Code: the experimental results for the UCI and genomic tasks can be reproduced from here. Neural Processes Code: the experimental results for the Neural Processes task can be reproduced from here.
Open Datasets | Yes | Data for Genomics Experiment: the VCF file containing genotypes can be downloaded from the 1000Genomes chromosome 20 VCF file. Additionally, the microarray used for the genomics experiment can be downloaded from the Human Omni2.5 microarray. Beagle software, used as a baseline, can be obtained from Beagle 5.1. UCI Datasets: all UCI datasets can be obtained from the UCI Data Repository. (A minimal VCF-loading sketch is given after the table.)
Dataset Splits | Yes | We use 5008 complete sequences y that we divide into train/val/test splits of 0.86/0.12/0.02, respectively, following Browning et al. (2018b). (A split-arithmetic sketch follows the table.)
Hardware Specification | Yes | We use 24GB NVIDIA GeForce RTX 3090, Tesla V100-SXM2-16GB and NVIDIA RTX A6000-48GB GPUs for experiments in this paper.
Software Dependencies | No | The paper mentions software such as the Adam optimizer and Beagle 5.1, but does not provide specific version numbers for its own implementation's software dependencies (e.g., Python, PyTorch, or CUDA versions).
Experiment Setup | Yes | A batch size of 256 is used for Transformer methods, and we train using the lookahead Lamb optimizer (Zhang et al., 2019). Hyperparameters: in Table 10, we provide the range of hyper-parameters that were grid searched for different methods. (A hedged optimizer-setup sketch follows the table.)
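
The genotype data referenced in the Open Datasets row ship as a VCF file. As a minimal sketch, one way to inspect the downloaded 1000 Genomes chromosome 20 VCF is via pysam; the package choice, the local filename, and the indexing requirement are assumptions, not part of the paper's pipeline.

```python
# Minimal inspection sketch (assumes pysam is installed and the
# chromosome 20 VCF plus its tabix index have been downloaded locally;
# the filename below is hypothetical). Not the authors' preprocessing code.
import pysam

VCF_PATH = "ALL.chr20.phase3.vcf.gz"  # hypothetical local filename

vcf = pysam.VariantFile(VCF_PATH)
samples = list(vcf.header.samples)
print(f"individuals: {len(samples)}")  # 2504 individuals -> 5008 haplotypes

n_variants = sum(1 for _ in vcf.fetch("20"))  # fetch() requires a .tbi index
print(f"chromosome 20 variants: {n_variants}")
```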
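
The quoted 0.86/0.12/0.02 train/val/test split of 5008 sequences works out to roughly 4306/600/102 sequences. The sketch below illustrates that arithmetic with a simple random permutation; the paper's actual partitioning follows Browning et al. (2018b) and may assign indices differently.

```python
# Sketch of an 0.86/0.12/0.02 split over 5008 sequences.
# Assumption: a simple random permutation with a fixed seed;
# the paper's exact split (following Browning et al., 2018b) may differ.
import numpy as np

n_sequences = 5008
rng = np.random.default_rng(seed=0)
indices = rng.permutation(n_sequences)

n_train = int(0.86 * n_sequences)          # 4306
n_val = int(0.12 * n_sequences)            # 600
train_idx = indices[:n_train]
val_idx = indices[n_train:n_train + n_val]
test_idx = indices[n_train + n_val:]       # remaining 102 sequences (~2%)

print(len(train_idx), len(val_idx), len(test_idx))  # 4306 600 102
```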
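
The experiment setup quotes a batch size of 256 and a lookahead-wrapped Lamb optimizer. One way to reproduce that configuration is sketched below using the third-party torch-optimizer package; the package choice, the toy model and data, the learning rate, and the lookahead constants (k=5, alpha=0.5) are assumptions rather than values reported in the paper.

```python
# Sketch: batch size 256 with a lookahead-wrapped Lamb optimizer.
# Assumptions: the torch-optimizer package provides Lamb and Lookahead;
# model, data, learning rate, and lookahead constants are placeholders.
import torch
import torch_optimizer as optim
from torch.utils.data import DataLoader, TensorDataset

model = torch.nn.Linear(128, 1)  # stand-in for the actual SPIN architecture
data = TensorDataset(torch.randn(1024, 128), torch.randn(1024, 1))
loader = DataLoader(data, batch_size=256, shuffle=True)

base = optim.Lamb(model.parameters(), lr=1e-3, weight_decay=0.0)
optimizer = optim.Lookahead(base, k=5, alpha=0.5)  # Zhang et al. (2019)
loss_fn = torch.nn.MSELoss()

for x, y in loader:
    optimizer.zero_grad()
    loss = loss_fn(model(x), y)
    loss.backward()
    optimizer.step()
```

The lookahead wrapper keeps a set of slow weights that are pulled toward the fast (inner-optimizer) weights every k steps, which is the mechanism referenced by the (Zhang et al., 2019) citation.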