Deep Learning of Determinantal Point Processes via Proper Spectral Sub-gradient

Authors: Tianshu Yu, Yikang Li, Baoxin Li

ICLR 2020

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "Experiments show the effectiveness of our algorithm, indicating promising performance for practical learning problems. [...] In this section, we conduct two experiments. One is about metric learning and image hashing on MNIST and CIFAR to verify the effectiveness of our method, while another is for a local descriptor retrieval task based on HardNet (Mishchuk et al., 2017)."
Researcher Affiliation | Academia | Tianshu Yu, Yikang Li, Baoxin Li, Arizona State University, {tianshuy,yikang.li,baoxin.li}@asu.edu
Pseudocode | Yes | Algorithm 1 DPPSG [...] Algorithm 2 DPPSG*
Open Source Code | No | The paper does not contain any explicit statement about releasing the source code for its methodology or provide a link to a code repository.
Open Datasets | Yes | MNIST: "This simple dataset is suitable to reveal the geometric properties of the features on various tasks." [...] CIFAR-10 (image hashing) [...] CIFAR-100 (metric learning) [...] "This test utilizes the UBC Phototour dataset (Brown & Lowe, 2007)."
Dataset Splits | Yes | "We follow the protocol in Mishchuk et al. (2017) to treat two subsets as the training set and the third one as the testing set."
Hardware Specification | Yes | "We report the average overhead comparison on CIFAR-10 hashing task with varying batch sizes (100, 200, 250, 400 and 500) on a GTX 1080 GPU as in Table 6 (time in seconds)."
Software Dependencies | No | The paper mentions using 'Pytorch' but does not specify a version number for it or any other software dependencies.
Experiment Setup | Yes | MNIST: "Some parameters are set as follows: α = 5, λ1 = 10^3, λ2 = 10^6, margin µ = 0.8, variance for Gaussian kernel σ = 0.2 and ε = 10^-7. During the training, the batch size is set to 200. In each iteration of DPP and WGAN training, we uniformly sample 2,000 adversarial points from the space [-1, 1]^2. We adopt RMSprop and the learning rate is 10^-4 for all tests."
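Since the report concerns learning determinantal point processes through a spectral (eigenvalue-based) treatment of the kernel, the following is a generic sketch of the L-ensemble DPP log-likelihood that such training optimizes, computed via an eigendecomposition with eigenvalues floored at a small ε (the paper reports ε = 10^-7). This is not the authors' DPPSG algorithm; the function name and the toy kernel are illustrative assumptions.

```python
import numpy as np

def dpp_log_likelihood(L, subset, eps=1e-7):
    """Log-likelihood of `subset` under an L-ensemble DPP:
    log P(Y) = log det(L_Y) - log det(L + I),
    where L_Y is the principal submatrix of L indexed by Y.
    Eigenvalues are floored at `eps` so the log-determinant
    stays finite for near-singular kernels (illustrative choice,
    matching the eps = 1e-7 reported in the setup)."""
    L_Y = L[np.ix_(subset, subset)]
    ev_sub = np.linalg.eigvalsh(L_Y)          # spectrum of the submatrix
    ev_all = np.linalg.eigvalsh(L)            # spectrum of the full kernel
    log_det_sub = np.sum(np.log(np.maximum(ev_sub, eps)))
    # det(L + I) = prod_i (lambda_i + 1), computed stably via log1p
    log_det_norm = np.sum(np.log1p(np.maximum(ev_all, 0.0)))
    return log_det_sub - log_det_norm

# Toy PSD kernel built from random features (hypothetical data).
rng = np.random.default_rng(0)
X = rng.normal(size=(5, 3))
L = X @ X.T + 1e-3 * np.eye(5)
ll = dpp_log_likelihood(L, [0, 2])
```

During learning, the gradient of this objective flows through the eigendecomposition, which is where a proper subgradient is needed when eigenvalues collide or hit the floor.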