Deep Self-Dissimilarities as Powerful Visual Fingerprints
Authors: Idan Kligvasser, Tamar Rott Shaham, Yuval Bahat, Tomer Michaeli
NeurIPS 2021
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Multiple sections describe experiments, evaluations on datasets (PieAPP, DIV2K, BSD100, REDS), and present quantitative results using metrics such as Pearson correlation, SSIM, NIQE, PSNR, and LPIPS in tables and figures. For example, "We evaluate this measure in a perceptual image quality assessment task. We use the PieAPP dataset [40], which contains 4800 pairs of images, where one image is the reference and the other is a distorted version of that reference." (Section 3.4) and "Table 1 presents the results for several different layers ℓ of the VGG network." (Section 3.4) |
| Researcher Affiliation | Academia | "Idan Kligvasser Technion Israel Institute of Technology kligvasser@campus.technion.ac.il Tamar Rott Shaham Technion Israel Institute of Technology stamarot@campus.technion.ac.il Yuval Bahat Technion Israel Institute of Technology yuval.bahat@campus.technion.ac.il Tomer Michaeli Technion Israel Institute of Technology tomer.m@ee.technion.ac.il" |
| Pseudocode | No | The paper does not contain any structured pseudocode or algorithm blocks. |
| Open Source Code | No | The paper does not include an unambiguous statement about releasing code for the described methodology or a direct link to a source-code repository. |
| Open Datasets | Yes | "We train the regression network ψ... on the BSD training set [30] containing 400 clean images... Training is done over 800 LR-HR image pairs from the DIV2K dataset [1]... Training is done using the REDS dataset [39] consisting of 30,000 image pairs {x, y} from 300 different scenes." (Sections 3.5, 4.1, 4.2) |
| Dataset Splits | No | While the paper mentions using specific datasets like the 'BSD training set' and 'REDS validation set', it does not provide explicit details about the dataset splits, such as exact percentages, sample counts for each split, or how these splits were generated (e.g., random seed, stratified). |
| Hardware Specification | No | The paper does not provide specific hardware details such as GPU/CPU models, processor types, or memory amounts used for running its experiments. |
| Software Dependencies | No | The paper mentions various models and optimizers (e.g., 'VGG-19 network', 'Adam optimizer', 'SRResNet architecture'), but does not provide specific version numbers for software dependencies or programming languages (e.g., Python, PyTorch versions). |
| Experiment Setup | Yes | "We train our networks for 300K epochs using the Adam optimizer with a batch size of 16 and an initial learning-rate of 2 × 10⁻⁴, which is halved after 90K, 180K and 270K steps. See Supplementary Material (SM) for full training details." (Section 4) |
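The quality-assessment evaluation reports Pearson correlation between the predicted and ground-truth perceptual scores. As a reminder of what that metric computes (the standard formula, not code from the paper), a minimal implementation is:

```python
import math

def pearson(xs, ys):
    """Pearson linear correlation coefficient between two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)
```

A perfectly linear relationship yields ±1; values near 0 indicate no linear agreement between predicted and reference scores.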
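The quoted training setup (initial learning rate 2 × 10⁻⁴, halved after 90K, 180K and 270K steps) corresponds to a standard step-decay schedule. A minimal sketch of that schedule, assuming halving at each milestone (the milestone values come from the quote; the function name is illustrative):

```python
def learning_rate(step, base_lr=2e-4, milestones=(90_000, 180_000, 270_000)):
    """Step-decay schedule: halve the learning rate at each milestone reached."""
    halvings = sum(step >= m for m in milestones)
    return base_lr * (0.5 ** halvings)
```

In a PyTorch training loop, the equivalent behavior is typically obtained with `torch.optim.lr_scheduler.MultiStepLR(optimizer, milestones=[90_000, 180_000, 270_000], gamma=0.5)`.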