Deconvolutional Density Network: Modeling Free-Form Conditional Distributions
Authors: Bing Chen, Mazharul Islam, Jisuo Gao, Lin Wang (pp. 6183-6192)
AAAI 2022
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Experiments show that DDN achieves the best results on diverse univariate and multivariate CDE tasks in comparison with six different explicit CDE models. This section examines the efficacy of DDN for explicitly estimating free-form conditional PDFs on continuous domains. |
| Researcher Affiliation | Academia | 1 Shandong Provincial Key Laboratory of Network Based Intelligent Computing, University of Jinan, Jinan 250022, China 2 Quancheng Shandong Laboratory, Jinan 250100, China |
| Pseudocode | No | The paper describes the architecture and functionality of the Deconvolutional Density Network (DDN) through text and diagrams (Fig. 2), but it does not provide a formal pseudocode block or algorithm. |
| Open Source Code | Yes | The code of DDN is available at https://github.com/NBICLAB/DDN |
| Open Datasets | Yes | The validation also included seven real-world datasets from the UCI machine learning repository (Dua and Graff 2017). |
| Dataset Splits | Yes | For each trial, we randomly split the data into a training set and test set with a ratio of 3:7, to mimic a data-deficiency scenario. |
| Hardware Specification | Yes | All experiments were run on a Tesla V100 GPU. |
| Software Dependencies | No | The paper mentions using 'Adam optimizer' but does not specify software dependencies like programming languages (e.g., Python), libraries (e.g., PyTorch, TensorFlow), or their specific version numbers. |
| Experiment Setup | Yes | The learning rate was set to 3e-4, and the batch size was 256. The models were trained independently for 10 trials on each dataset, with 3000 epochs per experiment. In all experiments, the estimator of our DDN contained two Upsample-Conv-BatchNorm-LeakyReLU blocks and one Upsample-Conv block in each target dimension. We configured the blocks so that the output space was partitioned into N = 256 bins. |
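The estimator configuration in the table (two Upsample-Conv-LeakyReLU blocks plus one Upsample-Conv block, producing N = 256 bins) can be sketched as follows. This is a minimal NumPy illustration, not the authors' implementation: the 32-dimensional latent input, the x2 nearest-neighbor upsampling, the filter sizes, and the softmax/histogram density head are all assumptions chosen so that three x2 upsamples reach 256 bins, and BatchNorm is omitted for brevity.

```python
import numpy as np

rng = np.random.default_rng(0)

def upsample_conv(x, w, leaky=True):
    """One Upsample-Conv(-LeakyReLU) block: x2 nearest-neighbor
    upsampling, then a 1-D convolution (BatchNorm omitted here)."""
    x = np.repeat(x, 2)                    # upsample by factor 2
    x = np.convolve(x, w, mode="same")     # conv, output length preserved
    return np.where(x > 0, x, 0.01 * x) if leaky else x

def ddn_head(latent, weights, low, high):
    """Two Upsample-Conv-LeakyReLU blocks + one Upsample-Conv block,
    then a softmax over the output bins, scaled to a piecewise-constant
    PDF on [low, high]."""
    x = upsample_conv(latent, weights[0])
    x = upsample_conv(x, weights[1])
    logits = upsample_conv(x, weights[2], leaky=False)
    z = np.exp(logits - logits.max())
    p = z / z.sum()                        # probability mass per bin
    n_bins = p.size
    return p * n_bins / (high - low)       # density = mass / bin width

latent = rng.normal(size=32)               # 32 * 2**3 = 256 output bins
weights = [rng.normal(size=5) * 0.1 for _ in range(3)]
pdf = ddn_head(latent, weights, low=0.0, high=1.0)
bin_width = 1.0 / pdf.size
print(pdf.size, round(pdf.sum() * bin_width, 6))  # prints: 256 1.0
```

Dividing the softmax mass by the bin width is what makes the output a valid density: the 256 bin heights are nonnegative and their Riemann sum over the target interval is exactly 1.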