Core Dependency Networks

Authors: Alejandro Molina, Alexander Munteanu, Kristian Kersting

AAAI 2018

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | To corroborate our theoretical results, we empirically evaluated the resulting Core DNs on real data sets. The results demonstrate significant gains over no or naive sub-sampling, even in the case of count data.
Researcher Affiliation | Academia | Alejandro Molina (alejandro.molina@tu-dortmund.de), CS Department, TU Dortmund, Germany; Alexander Munteanu (alexander.munteanu@tu-dortmund.de), CS Department, TU Dortmund, Germany; Kristian Kersting (kersting@cs.tu-darmstadt.de), CS Department and Centre for Cognitive Science, TU Darmstadt, Germany
Pseudocode | No | The paper does not contain any structured pseudocode or algorithm blocks.
Open Source Code | Yes | To this aim, we implemented (C)DNs in Python [1]. All experiments ran on a Linux machine (56 cores, 4 GPUs, and 512GB RAM). All DNs were trained using iteratively reweighted least squares (IRWLS); however, coresets do not depend on the learning algorithm used. [1] https://github.com/alejandromolinaml/CoreDNs
Open Datasets | Yes | We used the MNIST [2] data set of handwritten labeled digits. ... [2] http://yann.lecun.com/exdb/mnist/ The second dataset contains traffic count measurements on selected roads around the city of Cologne in Germany (Ide et al. 2015). ... the NIPS [3] bag-of-words dataset. ... [3] https://archive.ics.uci.edu/ml/datasets/bag+of+words
Dataset Splits | Yes | For each dataset, we performed ten-fold cross-validation for training a full DN (Full) using all the data, leverage score sampling coresets (CDNs), and uniform samples (Uniform), for different sample sizes.
Hardware Specification | Yes | All experiments ran on a Linux machine (56 cores, 4 GPUs, and 512GB RAM).
Software Dependencies | No | The paper states 'implemented (C)DNs in Python' but does not provide version numbers for Python or any other software libraries or dependencies.
Experiment Setup | No | The paper states 'All DNs were trained using Iteratively reweighted least squares (IRWLS)' but does not provide specific hyperparameters such as learning rate, batch size, or number of epochs.
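The "leverage score sampling coresets" compared against uniform samples in the Dataset Splits row can be illustrated with a short sketch. This is a generic textbook construction, not the authors' implementation: leverage scores are the squared row norms of an orthonormal basis for the column space of the design matrix, and rows are sampled with probability proportional to those scores, carrying importance weights.

```python
import numpy as np

def leverage_score_sample(X, k, rng=None):
    """Sample k rows of X with probability proportional to their
    statistical leverage scores; return row indices and weights."""
    rng = np.random.default_rng(rng)
    # Thin SVD: the rows of U give an orthonormal basis of col(X).
    U, _, _ = np.linalg.svd(X, full_matrices=False)
    scores = np.sum(U**2, axis=1)        # leverage score of each row
    probs = scores / scores.sum()        # normalize to a distribution
    idx = rng.choice(X.shape[0], size=k, replace=True, p=probs)
    weights = 1.0 / (k * probs[idx])     # importance-sampling weights
    return idx, weights
```

High-leverage rows (e.g., outliers that dominate a direction of the data) are sampled far more often than under uniform sampling, which is why such coresets can preserve regression structure at small sample sizes.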
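The IRWLS training mentioned in the Open Source Code and Experiment Setup rows can be sketched for the count-data case the paper highlights. This is the standard IRLS recipe for a Poisson GLM with log link, offered as an illustration of the algorithm class, not as the authors' code.

```python
import numpy as np

def irls_poisson(X, y, n_iter=25, tol=1e-8):
    """Fit a Poisson GLM (log link) by iteratively reweighted
    least squares: repeatedly solve a weighted least-squares step."""
    beta = np.zeros(X.shape[1])
    for _ in range(n_iter):
        eta = X @ beta
        mu = np.exp(eta)                 # conditional mean under log link
        W = mu                           # IRLS weights for Poisson
        z = eta + (y - mu) / mu          # working response
        # Solve (X^T W X) beta = X^T W z
        XtW = X.T * W
        beta_new = np.linalg.solve(XtW @ X, XtW @ z)
        if np.max(np.abs(beta_new - beta)) < tol:
            return beta_new
        beta = beta_new
    return beta
```

Because each iteration is just a weighted least-squares solve on the (sub)sampled rows, the coreset weights from leverage score sampling plug in directly, which is consistent with the paper's remark that coresets do not depend on the learning algorithm used.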