On Learning Latent Models with Multi-Instance Weak Supervision
Authors: Kaifu Wang, Efthymia Tsamoura, Dan Roth
NeurIPS 2023
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Furthermore, we conclude with empirical experiments for learning under unknown transitions. The empirical results align with our theoretical findings, exposing also the issue of scalability in the weak supervision literature. |
| Researcher Affiliation | Collaboration | Kaifu Wang (University of Pennsylvania, kaifu@sas.upenn.edu); Efthymia Tsamoura (Samsung AI, efi.tsamoura@samsung.com); Dan Roth (University of Pennsylvania, danroth@seas.upenn.edu) |
| Pseudocode | No | The paper does not contain a clearly labeled pseudocode or algorithm block. |
| Open Source Code | No | The paper states "For each neurosymbolic framework, our implementation built upon the sources made available by the authors," indicating reuse of existing code, but it does not state that the authors' own implementation is open-sourced or provide a link to it. |
| Open Datasets | Yes | The experiments use the publicly available MNIST dataset: "We aim to learn an MNIST classifier using the weighted sum of 2, 3 and 4 MNIST digits and assuming that the weights are unknown, as in Example 7." (A hedged sketch of this weak-supervision setup appears after the table.) |
| Dataset Splits | No | The paper mentions "training samples" and "pretraining" but does not explicitly state or define a validation set split for hyperparameter tuning or early stopping. |
| Hardware Specification | No | The paper does not explicitly describe the specific hardware (e.g., GPU models, CPU models, or memory) used to run the experiments. |
| Software Dependencies | Yes | The neural classifiers for all frameworks but ABL were built using PyTorch 2.0.0 and Python 3.9. For ABL, we aligned with the implementation provided by the authors and used TensorFlow 2.11.0 and Keras 2.11.0. |
| Experiment Setup | Yes | The layers of the MNIST digit classifier are as follows: Conv2d(1, 6, 5), MaxPool2d(2, 2), ReLU(True), Conv2d(6, 16, 5), MaxPool2d(2, 2), ReLU(True), Linear(16 * 4 * 4, 120), ReLU(), Linear(120, 84), ReLU(), Linear(84, N), Softmax(1). (A PyTorch sketch of this architecture appears after the table.) |
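
The following is a minimal, hedged sketch of how the multi-instance weak supervision described in the "Open Datasets" row could be constructed: bags of k MNIST digits whose only observed label is a weighted sum of their (hidden) digit values. The weight vector, bag sizes, and the helper `make_weighted_sum_bag` are illustrative assumptions rather than the paper's exact data pipeline; in the paper, the weights are treated as unknown to the learner.

```python
# Hedged sketch: constructing weighted-sum bags from MNIST (assumptions noted below).
import random

import torch
from torchvision import datasets, transforms

# Standard torchvision MNIST download (assumed data root "./data").
mnist = datasets.MNIST(root="./data", train=True, download=True,
                       transform=transforms.ToTensor())


def make_weighted_sum_bag(k, weights):
    """Sample k digits; return their images and the weighted sum of their labels.

    Only the weighted sum is exposed as supervision; the per-digit labels
    stay latent, matching the multi-instance setting described in the paper.
    """
    idxs = random.sample(range(len(mnist)), k)
    images = torch.stack([mnist[i][0] for i in idxs])          # (k, 1, 28, 28)
    labels = [mnist[i][1] for i in idxs]                       # hidden from the learner
    weak_label = sum(w * y for w, y in zip(weights, labels))   # the only observed label
    return images, weak_label


# Example: a bag of 3 digits with illustrative (not the paper's) weights (1, 2, 3).
images, weak_label = make_weighted_sum_bag(3, weights=(1, 2, 3))
print(images.shape, weak_label)
```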
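
Below is a minimal PyTorch sketch of the MNIST digit classifier listed in the "Experiment Setup" row. The layer sequence follows the paper's description; the `nn.Flatten()` between the convolutional and linear stages is an assumption needed to express the model as an `nn.Sequential`, and `N` (the number of output classes) is set to 10 here on the assumption of standard MNIST digits.

```python
# Hedged sketch of the reported MNIST digit classifier (layer list from the paper).
import torch
from torch import nn

N = 10  # assumed number of digit classes for standard MNIST

mnist_classifier = nn.Sequential(
    nn.Conv2d(1, 6, 5),
    nn.MaxPool2d(2, 2),
    nn.ReLU(True),
    nn.Conv2d(6, 16, 5),
    nn.MaxPool2d(2, 2),
    nn.ReLU(True),
    nn.Flatten(),                # assumed: needed to feed the linear layers
    nn.Linear(16 * 4 * 4, 120),
    nn.ReLU(),
    nn.Linear(120, 84),
    nn.ReLU(),
    nn.Linear(84, N),
    nn.Softmax(1),
)

# Sanity check: a batch of 28x28 grayscale digits maps to class probabilities.
probs = mnist_classifier(torch.randn(8, 1, 28, 28))
assert probs.shape == (8, N)
```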