Robust Weak Supervision with Variational Auto-Encoders

Authors: Francesco Tonolini, Nikolaos Aletras, Yunlong Jiao, Gabriella Kazai

ICML 2023

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | An extensive empirical evaluation on a standard WS benchmark shows that our WSVAE is competitive to state-of-the-art methods and substantially more robust to LF engineering.
Researcher Affiliation | Collaboration | (1) Amazon; (2) Computer Science Department, University of Sheffield.
Pseudocode | No | The paper describes the model architecture and equations but does not include any explicit pseudocode or algorithm blocks.
Open Source Code | No | The paper mentions the use of the Wrench framework as a publicly available benchmark but does not provide a statement or link for the open-source code of the WS-VAE itself.
Open Datasets | Yes | We test our WS-VAE against several state-of-the-art WS methods on Wrench (Zhang et al., 2021b), a standard publicly available WS benchmark which consists of various tasks. Our experiments are performed on the following 6 benchmark data-sets for binary classification tasks, made available with pre-computed weak labels in the Wrench framework (Zhang et al., 2021b): YouTube (Alberto et al., 2015), IMDB (Maas et al., 2011), SMS (Gómez Hidalgo et al., 2006), Tennis Rally (Fu et al., 2020; Zhang et al., 2021b), Commercial (Fu et al., 2020; Zhang et al., 2021b), and Census (Kohavi et al., 1996). (A data-loading sketch follows after this table.)
Dataset Splits | No | Validation sample size is omitted in the above descriptions, as we do not use validation labels to fine-tune hyper-parameters or perform early stopping.
Hardware Specification | No | The paper does not provide specific details about the hardware used for running the experiments (e.g., specific GPU or CPU models, memory, or cluster configurations).
Software Dependencies | No | The paper mentions several software components, such as TensorFlow, ADAM, BERT, Wrench, and Snorkel, but it does not specify version numbers for these dependencies.
Experiment Setup | Yes | In all experiments the WS-VAE is trained with the TensorFlow ADAM optimiser for 10,000 iterations, a batch size of 32 and an initial training rate of 0.001. With these hyper-parameters, the WS-VAE was observed to converge its cost function in all tested conditions and all data-sets. The optimiser is set to maximise the ELBO of equation 6 with γ = 100 in all experiments... The latent dimensionality is also kept the same for all experiments and is equal to 10. (Further details for baselines are also provided in Section B.3.) A sketch of these settings follows below.
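To make the quoted experiment settings concrete, here is a minimal TensorFlow sketch of the training loop, assuming a generic VAE-style model. Only the hyper-parameters (ADAM with an initial rate of 0.001, 10,000 iterations, batch size 32, gamma = 100, latent dimensionality 10) come from the paper; the model object and the negative_elbo() function are placeholders standing in for the WS-VAE and its equation-6 ELBO, which are not reproduced here.

import tensorflow as tf

# Hyper-parameters quoted in the paper's experiment setup.
LATENT_DIM = 10       # latent dimensionality used in all experiments
BATCH_SIZE = 32
ITERATIONS = 10_000
GAMMA = 100.0         # weighting in the ELBO of equation 6 (not reproduced here)

optimizer = tf.keras.optimizers.Adam(learning_rate=0.001)

def negative_elbo(model, x_batch, weak_labels, gamma=GAMMA):
    # Placeholder: should return the negative of the paper's equation-6 ELBO
    # for one batch of inputs and their weak (labelling-function) labels.
    raise NotImplementedError

def train(model, dataset):
    # Run the quoted schedule: 10,000 ADAM steps at batch size 32.
    batches = dataset.shuffle(10_000).repeat().batch(BATCH_SIZE)
    for x_batch, weak_labels in batches.take(ITERATIONS):
        with tf.GradientTape() as tape:
            loss = negative_elbo(model, x_batch, weak_labels)
        grads = tape.gradient(loss, model.trainable_variables)
        optimizer.apply_gradients(zip(grads, model.trainable_variables))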
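For the "Open Datasets" row above, the following is a hedged sketch of how the pre-computed weak labels could be read from a local copy of the Wrench benchmark. Wrench distributes each dataset as JSON splits (train/valid/test); the field names weak_labels and label, and the example path, are assumptions based on the published Wrench data format and should be checked against your copy.

import json
import numpy as np

def load_wrench_split(path):
    # Read one Wrench JSON split and return the weak-label matrix
    # (n_examples x n_labelling_functions) together with the gold labels.
    with open(path) as f:
        examples = json.load(f)          # dict keyed by example id
    ids = sorted(examples, key=int)      # ids are assumed to be stringified integers
    weak = np.array([examples[i]["weak_labels"] for i in ids])  # -1 typically marks an abstaining LF
    gold = np.array([examples[i]["label"] for i in ids])
    return weak, gold

# Illustrative usage (path is hypothetical):
# L_train, y_train = load_wrench_split("datasets/youtube/train.json")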