Variational Weighting for Kernel Density Ratios

Authors: Sangwoong Yoon, Frank C. Park, Gunsu Yun, Iljung Kim, Yung-Kyun Noh

NeurIPS 2023

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | 5 experiments. Evidence: "We first demonstrate in Fig. 3 how the use of VWKDE alters log probability density ratio (LPDR) and K-L divergence toward a better estimation." and "Table 1: Performances for defect surface detection (left) and defect localization (right)."
Researcher Affiliation | Collaboration | Sangwoong Yoon, Korea Institute for Advanced Study (swyoon@kias.re.kr); Frank C. Park, Seoul National University / Saige Research (fcp@snu.ac.kr); Gunsu Yun, POSTECH (gunsu@postech.ac.kr); Iljung Kim, Hanyang University (iljung0810@hanyang.ac.kr); Yung-Kyun Noh, Hanyang University / Korea Institute for Advanced Study (nohyung@hanyang.ac.kr)
Pseudocode | Yes | Algorithm 1 (model-free) and Algorithm 2 (model-based)
Open Source Code | Yes | Code is available at https://github.com/swyoon/variationally-weighted-kernel-density-estimation
Open Datasets | Yes | "For the evaluation of the algorithm, we use a publicly available dataset for surface inspection: DAGM. The dataset contains six distinct types of normal and defective surfaces." Data access: https://hci.iwr.uni-heidelberg.de/node/3616
Dataset Splits | No | Appendix H states: "There are 1,150 images per class, half of which is for training and the remaining is for testing." Training and test splits are specified, but no explicit validation split is mentioned.
Hardware Specification | No | The paper does not provide hardware details (e.g., GPU/CPU models, memory) used for running the experiments.
Software Dependencies | No | The paper does not specify version numbers for any programming languages, libraries, or other software components used in the experiments.
Experiment Setup | Yes | The CNN architecture is Conv(20)-Conv(20)-MaxPool-Conv(20)-Conv(20)-MaxPool-FC(20)-Dropout-FC(1), where Conv is a 3x3 convolution layer with 20 channels, MaxPool is a 2x2 max-pooling layer, FC is a fully connected layer, and Dropout is a dropout operation with probability 0.5. Training uses a binary cross-entropy loss and the Adam optimizer.
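The quoted architecture string can be sketched in PyTorch as follows. This is a hypothetical reconstruction, not the authors' code: the layer order and channel counts follow the description above, but the input channels, input resolution, padding, and ReLU activations are assumptions the text does not state.

```python
import torch
import torch.nn as nn

# Hypothetical sketch of Conv(20)-Conv(20)-MaxPool-Conv(20)-Conv(20)-MaxPool-
# FC(20)-Dropout-FC(1). Assumptions (not in the paper text): grayscale 32x32
# input, 'same' padding on the 3x3 convolutions, ReLU activations.
class DefectCNN(nn.Module):
    def __init__(self, in_channels: int = 1):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(in_channels, 20, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(20, 20, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                 # 2x2 max pooling
            nn.Conv2d(20, 20, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(20, 20, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.LazyLinear(20), nn.ReLU(),    # FC(20); LazyLinear infers the flattened size
            nn.Dropout(p=0.5),               # dropout with probability 0.5
            nn.Linear(20, 1),                # FC(1): one logit for binary classification
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.classifier(self.features(x))

model = DefectCNN()
logits = model(torch.randn(4, 1, 32, 32))   # a batch of 4 assumed 32x32 patches
# Binary cross-entropy on logits, optimized with Adam, as the description states.
loss = nn.BCEWithLogitsLoss()(logits, torch.zeros(4, 1))
optimizer = torch.optim.Adam(model.parameters())
```

`nn.LazyLinear` is used only so the sketch works for any assumed input size; with a known resolution it would be an ordinary `nn.Linear`.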