Deep Message Passing on Sets

Authors: Yifeng Shi, Junier Oliva, Marc Niethammer (pp. 5750-5757)

AAAI 2020

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "In addition to demonstrating the interpretability of our model by learning the true underlying relational structure experimentally, we also show the effectiveness of our approach on both synthetic and real-world datasets by achieving results that are competitive with or outperform the state-of-the-art. For readers who are interested in the detailed derivations of several results that we present in this work, please see the supplementary material at: https://arxiv.org/abs/1909.09877. Experiments: We apply DMPS and its extensions to a range of synthetic-toy and real-world datasets. For each experiment, we compare our methods against, to the best of our knowledge, the state-of-the-art results for that dataset."
Researcher Affiliation | Academia | Yifeng Shi, Junier Oliva, Marc Niethammer; Department of Computer Science, UNC-Chapel Hill, USA; {yifengs, joliva, mn}@cs.unc.edu
Pseudocode | Yes | "Algorithm 1: Deep Message Passing on Sets with the Set-denoising Block"
Open Source Code | No | The paper does not explicitly state that source code for the methodology is provided, nor does it link to a code repository. The arXiv link points to supplementary material, which typically contains additional text and derivations, not necessarily code.
Open Datasets | Yes | "To test the model's ability to model set-structured data relationally, Lee et al. (2019) proposed the task of counting unique characters using the characters dataset (Lake, Salakhutdinov, and Tenenbaum 2015)... We apply DMPS and its variants to the ModelNet40 dataset (Chang et al. 2015)... The breast cancer dataset introduced in Gelasca et al. (2008) consists of 58 weakly-labeled 896×768 H&E images."
Dataset Splits | No | The paper mentions "test results" and a "training stage" but does not give the train/validation/test splits or their sizes. It defers some details to the supplementary material, but the main paper does not provide them.
Hardware Specification | No | The paper does not report any hardware details, such as GPU or CPU models or cloud computing specifications, used to run the experiments.
Software Dependencies | No | The paper does not list software dependencies (libraries, frameworks, or specialized packages) with version numbers that would be needed to replicate the experiments.
Experiment Setup | Yes | "Unless otherwise specified, three message passing steps, set-denoising blocks, or set-residual blocks are stacked to form the final model. We emphasize that we align as many architectural choices, such as learning rate, number of training batches, batch size, etc., as we can with Lee et al. (2019) for fair comparison."
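The pseudocode row above names Algorithm 1 (DMPS with the set-denoising block), and the experiment-setup row notes that three such blocks are stacked to form the final model. A minimal sketch of that structure is below; the attention-style affinity, the weight matrices `Ws`, and the blending factor `alpha` are illustrative assumptions, not the paper's exact parameterization.

```python
import numpy as np

def softmax(z, axis=-1):
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def message_passing_step(X, W):
    """One message-passing step over a set X of shape (n, d).

    The pairwise affinity below is an illustrative attention-style
    choice; the paper derives its own transition structure."""
    scores = (X @ W) @ X.T / np.sqrt(X.shape[1])  # (n, n) pairwise affinities
    A = softmax(scores, axis=-1)                  # row-stochastic mixing weights
    return A @ X                                  # each element aggregates messages

def set_denoising_block(X, W, alpha=0.5):
    """Blend original and message-passed features (alpha is assumed)."""
    return alpha * X + (1.0 - alpha) * message_passing_step(X, W)

def dmps_model(X, Ws):
    """Stack the blocks; the paper stacks three unless otherwise specified."""
    for W in Ws:
        X = set_denoising_block(X, W)
    return X

rng = np.random.default_rng(0)
X = rng.standard_normal((5, 8))                    # a set of 5 elements, 8 features
Ws = [rng.standard_normal((8, 8)) for _ in range(3)]
out = dmps_model(X, Ws)
print(out.shape)  # (5, 8): set size and feature width are preserved
```

Because each block only mixes elements through a row-normalized affinity and a blend with the input, the output keeps the input's shape, and the model is permutation-equivariant over set elements.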