Learning with Algorithmic Supervision via Continuous Relaxations

Authors: Felix Petersen, Christian Borgelt, Hilde Kuehne, Oliver Deussen

NeurIPS 2021 | Conference PDF | Archive PDF

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We evaluate the proposed continuous relaxation model on four challenging tasks and show that it can keep up with relaxations specifically designed for each individual task.
Researcher Affiliation | Collaboration | Felix Petersen (University of Konstanz, felix.petersen@uni.kn); Christian Borgelt (University of Salzburg, christian@borgelt.net); Hilde Kuehne (University of Frankfurt and MIT-IBM Watson AI Lab, kuehne@uni-frankfurt.de); Oliver Deussen (University of Konstanz, oliver.deussen@uni.kn)
Pseudocode | No | Pseudo-code for all algorithms as well as additional information on the relaxation and the inverse temperature parameter can be found in the supplementary material. (A minimal sketch of such a relaxation follows the table.)
Open Source Code | Yes | The implementation of this work including a high-level PyTorch [36] library for automatic continuous relaxation of algorithms is openly available at github.com/Felix-Petersen/algovision.
Open Datasets | Yes | a set of four-digit numbers based on concatenated MNIST digits [29] is given...data set of 13 object classes from ShapeNet [30]...Warcraft terrains...EMNIST data set [12]
Dataset Splits | No | For each task, we tune this parameter on a validation set.
Hardware Specification | No | The paper mentions general hardware terms like 'GPUs' but does not provide specific details such as model numbers, processor types, or memory amounts.
Software Dependencies | No | The paper mentions 'PyTorch [36]' and the 'Adam optimizer [38]' but does not specify their version numbers or other software dependencies with specific versions.
Experiment Setup | Yes | For training, we use the Adam optimizer [38] with a learning rate of 10^-4 for between 1.5·10^5 and 1.5·10^6 iterations...train for 50 epochs with a batch size of 70...maximum batch size of 2...For training, we use an inverse temperature of β = 9 and Adam (η = 10^-4) for 128·512 iterations.
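
The Pseudocode row above notes that details on the relaxation and the inverse temperature parameter live in the supplementary material. As a rough illustration only (not the AlgoVision API, whose exact signatures the excerpts do not specify), the sketch below shows the core idea the paper describes: a hard comparison inside a branch is replaced by a logistic sigmoid of the β-scaled difference, so both branches are executed and blended by the branch probability. The function name relaxed_if and the toy branch are hypothetical.

import torch

def relaxed_if(lhs, rhs, then_val, else_val, beta=9.0):
    # Relax the hard predicate (lhs < rhs) into a probability via the
    # logistic sigmoid; beta is the inverse temperature, and as
    # beta -> infinity the hard if/else behavior is recovered.
    p = torch.sigmoid(beta * (rhs - lhs))
    # Evaluate both branches and take their convex combination, which
    # keeps the result differentiable with respect to the condition.
    return p * then_val + (1.0 - p) * else_val

# Hard version:    y = x      if x < 0
#                  y = x**2   otherwise
# Relaxed version: differentiable everywhere, so gradients flow
# through the branching decision itself.
x = torch.tensor([-1.0, 0.5], requires_grad=True)
y = relaxed_if(x, torch.zeros_like(x), x, x ** 2, beta=9.0)
y.sum().backward()
print(x.grad)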
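
The Experiment Setup row quotes the optimizer settings but the task-specific pipelines are not reproduced here. The loop below mirrors only those quoted settings (Adam with η = 10^-4, one task's batch size of 70) around a placeholder model and loss; both placeholders are hypothetical stand-ins, not the paper's networks or algorithmic losses.

import torch

# Placeholder network; the paper's models are task-specific.
model = torch.nn.Linear(16, 4)
# Adam with learning rate 10^-4, as in the Experiment Setup row.
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

for step in range(100):            # paper: 1.5e5 to 1.5e6 iterations
    x = torch.randn(70, 16)        # batch size 70 for one reported task
    loss = model(x).pow(2).mean()  # placeholder for the relaxed-algorithm loss
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()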