Knowledge Distillation via Constrained Variational Inference

Authors: Ardavan Saeedi, Yuria Utsumi, Li Sun, Kayhan Batmanghelich, Li-wei Lehman (pp. 8132-8140)

AAAI 2022 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "We demonstrate the flexibility of our method by applying it to two real-world tasks of disease subtyping in Chronic Obstructive Pulmonary Disease (COPD) and disease trajectory modeling in the MIMIC-III dataset (Johnson et al. 2016). We show knowledge distillation in probabilistic graphical models can improve their predictive performance while not degrading their generative performance."
Researcher Affiliation | Collaboration | Ardavan Saeedi (Hyperfine*), Yuria Utsumi (Massachusetts Institute of Technology), Li Sun (University of Pittsburgh), Kayhan Batmanghelich (University of Pittsburgh), Li-wei H. Lehman (Massachusetts Institute of Technology)
Pseudocode | No | The paper describes its methods mathematically but does not include any pseudocode or algorithm blocks.
Open Source Code | No | The paper neither states that the source code is available nor links to a code repository.
Open Datasets | Yes | "We base our evaluation on a large-scale dataset from COPDGene study (Regan et al. 2011) with lung CT images for 7,292 subjects. We extract the cohort from MIMIC-III (Johnson et al. 2016), a public, de-identified critical care database."
Dataset Splits | Yes | "We randomly split the data into 70% train, 15% validation, and 15% test splits."
Hardware Specification | No | The paper does not specify the hardware (e.g., GPU/CPU models, memory) used to run its experiments.
Software Dependencies | No | The paper mentions probabilistic programming tools such as Stan and inference frameworks such as ADVI and AEVB, but it does not list software dependencies with version numbers (e.g., Python, PyTorch, or specific library versions) needed to replicate the experiments.
Experiment Setup | No | The paper states "See appendix for the hyperparameter setting and details of all the experiments," so these specifics are not present in the main text.
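The 70%/15%/15% random split quoted above is straightforward to reproduce. The sketch below is a hypothetical illustration, not the authors' code: the function name, the fixed seed, and the use of Python's `random` module are all assumptions, since the paper does not publish its splitting code.

```python
import random

def split_70_15_15(items, seed=0):
    """Randomly partition items into 70% train, 15% validation,
    and 15% test, matching the ratios reported in the paper.
    The seed is an assumption; the paper does not state one."""
    rng = random.Random(seed)
    items = list(items)
    rng.shuffle(items)
    n = len(items)
    n_train = round(0.70 * n)
    n_val = round(0.15 * n)
    train = items[:n_train]
    val = items[n_train:n_train + n_val]
    test = items[n_train + n_val:]  # remainder, ~15%
    return train, val, test

train, val, test = split_70_15_15(range(100))
# 70 train, 15 validation, 15 test items
```

Note that the test set is taken as the remainder rather than a third rounded slice, so every item lands in exactly one split even when the dataset size is not divisible by 20.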