Robustly Train Normalizing Flows via KL Divergence Regularization

Authors: Kun Song, Ruben Solozabal, Hao Li, Martin Takáč, Lu Ren, Fakhri Karray

AAAI 2024

Reproducibility Variable Result LLM Response
Research Type Experimental Ultimately, we evaluate the performance of NFs on out-of-distribution (OoD) detection tasks. The excellent results obtained demonstrate the effectiveness of the proposed regularization term. For example, with the help of the proposed regularization, the OoD detection score increases at most 30% compared with the one without the regularization.
Researcher Affiliation Academia Kun Song1, Ruben Solozabal1, Hao Li2, Martin Takáč1, Lu Ren2*, Fakhri Karray1 1Mohamed bin Zayed University of Artificial Intelligence, Abu Dhabi, UAE 2Anhui University, Hefei City, Anhui, China {kun.song, ruben.solozabal, martin.takac, fakhri.karray}@mbzu.ac.ae, lihao6897@gmail.com, penny lu@ahu.edu.cn
Pseudocode No The paper does not contain structured pseudocode or algorithm blocks.
Open Source Code Yes Code is available on GitHub1. 1https://github.com/Optimization-and-Machine-Learning-Lab/NFs
Open Datasets Yes The models are trained in CIFAR10/100, featuring colored images of 3×32×32 pixels distributed across 10/100 categories, respectively. [...] The third dataset included is QuickDraw (Ha and Eck 2017), which consists of hand-drawn objects filtered to match the categories in CIFAR10. Lastly, the Tiny-ImageNet (Deng et al. 2009) validation dataset is also tested.
Dataset Splits No The paper mentions the 'Tiny-ImageNet (Deng et al. 2009) validation dataset' but does not give specific training/validation/test splits (percentages, sample counts, or an explicit splitting methodology) for all datasets used, nor does it state a cross-validation setup or the random seeds needed to reproduce the splits.
Hardware Specification No The paper does not provide specific hardware details such as GPU models, CPU types, or memory amounts used for running the experiments.
Software Dependencies No The paper states: "The model is implemented using the Framework for Easily Invertible Architectures (FrEIA) (Ardizzone et al. 2018)." However, it does not provide specific version numbers for FrEIA or any other key software dependencies.
Experiment Setup Yes For those experiments, we set the coefficient parameter of the proposed regularization to α = 0.01 since it produces the best outcome. According to (Ardizzone et al. 2020), we set γ = 1 in this paper, empirically. In the experiments, we set α ∈ {0, 10^-3, 10^-2, 10^-1}.
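To make the reported setup concrete, the following is a minimal, hypothetical Python sketch (not the authors' released code) of how the coefficient α might weight the proposed KL-divergence regularizer against the flow's standard negative log-likelihood objective; the function name and loss inputs are assumptions for illustration only.

```python
# Hypothetical sketch: combining a normalizing flow's negative
# log-likelihood (NLL) loss with an alpha-weighted KL regularization
# term, as described in the paper's experiment setup.

def regularized_loss(nll: float, kl_reg: float, alpha: float = 0.01) -> float:
    """Total training loss: NLL plus alpha times the KL regularizer.

    alpha = 0 recovers the unregularized baseline; the paper sweeps
    alpha over {0, 1e-3, 1e-2, 1e-1} and reports alpha = 0.01 as best.
    """
    return nll + alpha * kl_reg

# Sweep mirroring the grid reported in the experiment setup.
for alpha in (0.0, 1e-3, 1e-2, 1e-1):
    print(f"alpha={alpha}: loss={regularized_loss(nll=2.5, kl_reg=4.0, alpha=alpha)}")
```

With α = 0 the regularizer vanishes, which is the baseline the 30% OoD-score improvement is measured against.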