Notice: The reproducibility variables underlying each score are classified using an automated LLM-based pipeline, validated against a manually labeled dataset. LLM-based classification introduces uncertainty and potential bias; scores should be interpreted as estimates. Full accuracy metrics and methodology are described in [1].
# Lifting Weak Supervision To Structured Prediction
Authors: Harit Vishwakarma, Frederic Sala
NeurIPS 2022 | Venue PDF | LLM Run Details
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Empirical evaluation validates our claims and shows the merits of the proposed method. |
| Researcher Affiliation | Academia | Harit Vishwakarma (EMAIL), Frederic Sala (EMAIL), Department of Computer Sciences, University of Wisconsin-Madison, WI, USA. |
| Pseudocode | Yes | Algorithm 1 Algorithm for Pseudolabel Construction |
| Open Source Code | Yes | https://github.com/SprocketLab/WS-Struct-Pred |
| Open Datasets | No | We construct a synthetic dataset whose ground truth comprises n samples of two distinct rankings among the finite metric space of all length-four permutations. [...] We similarly evaluate our estimator using synthetic labels from a hyperbolic manifold, matching the setting of Section 5. |
| Dataset Splits | No | The paper mentions generating synthetic data and the number of samples 'n', but does not specify training/validation/test splits (e.g., percentages, specific sizes, or a cross-validation setup). |
| Hardware Specification | No | The paper does not provide specific details about the hardware (e.g., GPU models, CPU types, memory) used to run the experiments. |
| Software Dependencies | No | The paper does not specify the version numbers for any software dependencies or libraries used in the implementation or experimentation. |
| Experiment Setup | No | The paper describes the synthetic data generation and comparisons to prior work, but does not provide specific experimental setup details such as hyperparameter values (e.g., learning rates, batch sizes, epochs for the downstream model), optimizer settings, or other training configurations. |