Fusing Conditional Submodular GAN and Programmatic Weak Supervision

Authors: Kumar Shubham, Pranav Sastry, Prathosh AP

AAAI 2024

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We conduct experiments on multiple datasets and demonstrate the efficacy of our methods on several tasks vis-à-vis the current state-of-the-art methods. We assess the effectiveness of our method by experimenting with it on different datasets and label functions. Specifically, we use the label functions provided by the authors of WSGAN (Boecking et al. 2023). The primary experiments are done on six datasets, namely Animals with Attributes 2 (AWA2) (Xian et al. 2018), DomainNet (Peng et al. 2019), CIFAR10 (Krizhevsky, Hinton et al. 2009), MNIST (LeCun et al. 1998), Fashion MNIST (Xiao, Rasul, and Vollgraf 2017), and the German Traffic Sign Recognition Benchmark (GTSRB) (Stallkamp et al. 2012). Table 1: Comparison of the average posterior accuracy of the label models on samples with at least one vote from a label function. Table 2: Comparison of the image quality (mean FID) of the proposed method with WSGAN.
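The metric in Table 1 aggregates weak votes per sample and evaluates only covered samples. As a point of reference, a minimal majority-vote baseline restricted to samples with at least one label-function vote can be sketched as below; the vote-matrix convention (-1 = abstain) follows common PWS tooling such as Snorkel/Wrench, and all names here are illustrative, not the authors' code.

```python
# Minimal sketch (not the authors' code): majority-vote label aggregation,
# evaluated only on samples that received at least one label-function vote.
import numpy as np

def majority_vote_accuracy(votes: np.ndarray, y_true: np.ndarray) -> float:
    """votes: (n_samples, n_label_functions) with -1 for abstentions."""
    covered, preds = [], []
    for i, row in enumerate(votes):
        active = row[row != -1]
        if active.size == 0:           # skip samples with no votes
            continue
        preds.append(np.bincount(active).argmax())  # majority among active LFs
        covered.append(i)
    return float((np.array(preds) == y_true[covered]).mean())

# Toy example: 4 samples, 3 label functions, classes {0, 1}.
votes = np.array([[0, 0, -1],
                  [1, -1, 1],
                  [-1, -1, -1],       # no votes: excluded from the metric
                  [0, 1, 1]])
y = np.array([0, 1, 0, 1])
print(majority_vote_accuracy(votes, y))  # 1.0 on the three covered samples
```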
Researcher Affiliation | Academia | Kumar Shubham, Pranav Sastry, Prathosh AP — Indian Institute of Science, Bangalore, India; shubhamkuma3@iisc.ac.in, pranavsastry@iisc.ac.in, prathosh@iisc.ac.in
Pseudocode | No | The paper describes its algorithms and formulations mathematically and in prose, but does not include structured pseudocode or labeled algorithm blocks (e.g., 'Algorithm 1').
Open Source Code | Yes | Our implementation is available at https://github.com/kyrs/subpws-gan
Open Datasets | Yes | The primary experiments are done on six datasets, namely Animals with Attributes 2 (AWA2) (Xian et al. 2018), DomainNet (Peng et al. 2019), CIFAR10 (Krizhevsky, Hinton et al. 2009), MNIST (LeCun et al. 1998), Fashion MNIST (Xiao, Rasul, and Vollgraf 2017), and the German Traffic Sign Recognition Benchmark (GTSRB) (Stallkamp et al. 2012).
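Four of the six benchmarks can be pulled directly through torchvision; AWA2 and DomainNet must be downloaded from their respective project pages. A minimal loading sketch follows (transforms and paths are illustrative, not the paper's preprocessing; the GTSRB wrapper requires torchvision >= 0.12):

```python
# Hedged sketch: download the four torchvision-hosted benchmarks.
# AWA2 and DomainNet are not in torchvision and must be fetched separately.
from torchvision import datasets, transforms

tfm = transforms.Compose([transforms.Resize(32), transforms.ToTensor()])
cifar10 = datasets.CIFAR10("data", train=True, download=True, transform=tfm)
mnist   = datasets.MNIST("data", train=True, download=True, transform=tfm)
fmnist  = datasets.FashionMNIST("data", train=True, download=True, transform=tfm)
gtsrb   = datasets.GTSRB("data", split="train", download=True, transform=tfm)
print(len(cifar10), len(mnist), len(fmnist), len(gtsrb))
```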
Dataset Splits | No | Self-supervised learning: Experiments related to CIFAR10-B, MNIST, GTSRB, and Fashion MNIST use label functions generated by fine-tuning a shallow MLP network over a small validation dataset. (No explicit train/validation/test split sizes or ratios are reported.)
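As a rough illustration of the quoted setup, one way to turn a shallow MLP fitted on a small labelled validation split into an abstaining label function looks like this; the threshold, network size, and helper names are hypothetical, not taken from the paper.

```python
# Illustrative sketch only: a label function derived from a shallow MLP
# fitted on a small labelled validation split, abstaining (-1) when the
# predicted class probability falls below a confidence threshold.
import numpy as np
from sklearn.neural_network import MLPClassifier

def make_label_function(X_val, y_val, threshold=0.9):
    mlp = MLPClassifier(hidden_layer_sizes=(64,), max_iter=300).fit(X_val, y_val)

    def lf(X):
        proba = mlp.predict_proba(X)
        preds = proba.argmax(axis=1)
        preds[proba.max(axis=1) < threshold] = -1   # abstain when uncertain
        return preds

    return lf

rng = np.random.default_rng(0)
X_val = rng.normal(size=(200, 16))
y_val = (X_val[:, 0] > 0).astype(int)   # synthetic binary labels
lf = make_label_function(X_val, y_val)
print(lf(rng.normal(size=(5, 16))))     # class votes or -1 abstentions
```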
Hardware Specification | No | The paper does not provide specific hardware details (e.g., exact GPU/CPU models, processor types, or memory amounts) used for its experiments; it refers only vaguely to 'GPU compute' in the acknowledgments.
Software Dependencies | No | We used the Apricot library (Schreiber, Bilmes, and Noble 2020) to perform submodular optimization for subset selection using the lazy greedy method. All the major experiments are conducted using a DCGAN (Radford, Metz, and Chintala 2015) architecture. ... Additionally, we have provided a network ablation over the StyleGAN2-ADA (Karras et al. 2020) architecture... We utilized the FID (Heusel et al. 2017) score... We have conducted a comparative analysis with different label models, including Majority Voting (MV), MeTaL (Ratner et al. 2019), FlyingSquid (FS) (Fu et al. 2020), Snorkel (Ratner et al. 2020), the hyper label model (HLM) (Wu et al. 2023), and Dawid-Skene (DS) (Dawid and Skene 1979). To facilitate this comparison, we employed the label model codebase from Wrench (Zhang et al. 2021) and utilized the official codebases provided by WSGAN and the hyper label model. (No version numbers are provided for any of these software dependencies.)
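For the Apricot dependency specifically, lazy-greedy submodular subset selection is a short call; below is a hedged sketch using Apricot's facility-location selector. The feature matrix, subset size, and metric are illustrative; the paper's actual objective and configuration live in the authors' repository.

```python
# Hedged sketch: submodular subset selection with the Apricot library
# (package `apricot-select`) using the lazy greedy optimizer.
import numpy as np
from apricot import FacilityLocationSelection

X = np.random.default_rng(0).normal(size=(1000, 128)).astype(np.float64)
selector = FacilityLocationSelection(100, metric="euclidean", optimizer="lazy")
X_subset = selector.fit_transform(X)   # 100 rows maximizing facility location
print(X_subset.shape, selector.ranking[:5])  # subset and selection order
```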
Experiment Setup | No | In the current design, the discriminator network shares weights with the classifier and the accuracy-parameter-based model. Further, we have provided a network ablation over the StyleGAN2-ADA (Karras et al. 2020) architecture under a similar configuration for CIFAR10-B and high-resolution images like LSUN (Yu et al. 2015) in the supplementary material, which also includes the hyperparameters of subset selection and other implementation details. (Specific hyperparameter values are deferred to the supplementary material rather than stated in the main text, hence 'No'.)
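A minimal PyTorch sketch of the weight sharing the quote describes: a DCGAN-style trunk feeding both a real/fake head and a class-posterior head. This is an assumption about the layout for illustration, not the authors' exact architecture.

```python
# Assumed layout, not the authors' code: a shared convolutional backbone
# with separate adversarial and classification heads.
import torch
import torch.nn as nn

class SharedDiscriminator(nn.Module):
    def __init__(self, num_classes: int = 10):
        super().__init__()
        self.backbone = nn.Sequential(            # shared trunk (DCGAN-style)
            nn.Conv2d(3, 64, 4, 2, 1), nn.LeakyReLU(0.2),
            nn.Conv2d(64, 128, 4, 2, 1), nn.BatchNorm2d(128), nn.LeakyReLU(0.2),
            nn.Flatten(),
        )
        feat_dim = 128 * 8 * 8                    # for 32x32 inputs
        self.adv_head = nn.Linear(feat_dim, 1)            # real/fake logit
        self.cls_head = nn.Linear(feat_dim, num_classes)  # class logits

    def forward(self, x):
        h = self.backbone(x)                      # features shared by both heads
        return self.adv_head(h), self.cls_head(h)

d = SharedDiscriminator()
adv, cls = d(torch.randn(4, 3, 32, 32))
print(adv.shape, cls.shape)  # torch.Size([4, 1]) torch.Size([4, 10])
```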