Representation Learning of Compositional Data

Authors: Marta Avalos, Richard Nock, Cheng Soon Ong, Julien Rouar, Ke Sun

NeurIPS 2018

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Experiments on simulated data and microbiome data show the promise of our method.
Researcher Affiliation | Academia | Université de Bordeaux, Data61, the Australian National University and the University of Sydney. first.last@{u-bordeaux.fr,data61.csiro.au}
Pseudocode | No | The paper does not contain any explicitly labeled pseudocode or algorithm blocks.
Open Source Code | Yes | The source codes to reproduce our experimental results are available online (footnote 2: https://bitbucket.org/RichardNock/coda).
Open Datasets | Yes | We consider the following datasets available in the microbiome R package [18], each of which is randomly split into a training set (90%) and a testing set (10%). The HITChip Atlas dataset [17] contains 130 genus-level taxonomic groups... The two-week diet swap study... was reported in [28].
Dataset Splits | No | each of which is randomly split into a training set (90%) and a testing set (10%).
Hardware Specification | No | The paper does not provide specific hardware details (e.g., CPU/GPU models, memory) used for running experiments.
Software Dependencies | No | The paper mentions L-BFGS, ELU units, and the microbiome R package [18], but does not provide version numbers for the software components used in its experiments.
Experiment Setup | Yes | the encoding map is modeled by a feed-forward neural network with two hidden layers of ELU [11] units, each of size 100. [...] Our implementation simply uses L-BFGS [9] based on the gradient of the loss. [...] Both clr-AE and CoDA-AE use exactly the same structure with one hidden layer of 100 ELU [11] units in their decoders. [...] we add a small random noise to the encoder input so as to avoid overfitting.
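
To make the quoted setup concrete, below is a minimal PyTorch sketch of the clr-AE baseline: a two-hidden-layer ELU encoder and one-hidden-layer ELU decoder (100 units each), full-batch L-BFGS training, small random noise on the encoder input, and the 90%/10% random split quoted under "Open Datasets". This is not the authors' released code; the latent dimension, noise scale, plain MSE loss, pseudocount, and synthetic data are assumptions, and the paper's own CoDA-AE loss is not reproduced here.

```python
import torch
import torch.nn as nn

def clr(x, eps=1e-6):
    # Centered log-ratio transform, standard for compositional data:
    # clr(x) = log(x) - mean(log(x)), computed per sample.
    # eps is an assumed pseudocount guarding against zeros.
    logx = torch.log(x + eps)
    return logx - logx.mean(dim=1, keepdim=True)

class Autoencoder(nn.Module):
    # Encoder: two hidden layers of 100 ELU units, as quoted above.
    # Decoder: one hidden layer of 100 ELU units, as quoted above.
    def __init__(self, n_features, latent_dim=2):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(n_features, 100), nn.ELU(),
            nn.Linear(100, 100), nn.ELU(),
            nn.Linear(100, latent_dim),
        )
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, 100), nn.ELU(),
            nn.Linear(100, n_features),
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))

def train_lbfgs(model, x, noise_scale=1e-2, max_iter=200):
    # Full-batch L-BFGS on the gradient of the loss, as quoted above;
    # small random noise on the encoder input guards against
    # overfitting (the noise scale here is an assumption).
    opt = torch.optim.LBFGS(model.parameters(), max_iter=max_iter)
    mse = nn.MSELoss()

    def closure():
        opt.zero_grad()
        loss = mse(model(x + noise_scale * torch.randn_like(x)), x)
        loss.backward()
        return loss

    opt.step(closure)
    return model

if __name__ == "__main__":
    # Synthetic compositions standing in for the microbiome data:
    # 500 samples over 130 taxa (the sample count is made up; 130
    # matches the HITChip Atlas feature count quoted above).
    X = torch.rand(500, 130)
    X = X / X.sum(dim=1, keepdim=True)  # rows sum to 1

    # Random 90%/10% train/test split, as quoted under "Open Datasets".
    perm = torch.randperm(len(X))
    cut = int(0.9 * len(X))
    Z = clr(X)  # clr-AE operates on clr-transformed inputs
    model = train_lbfgs(Autoencoder(Z.shape[1]), Z[perm[:cut]])
    with torch.no_grad():
        test = Z[perm[cut:]]
        print("clr-AE test MSE:", nn.MSELoss()(model(test), test).item())
```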