Estimation from Indirect Supervision with Linear Moments
Authors: Aditi Raghunathan, Roy Frostig, John Duchi, Percy Liang
ICML 2016
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | In Section 7, we apply our framework empirically to our two motivating settings: (i) learning a regression model under local privacy constraints, and (ii) learning a part-of-speech tagger with lightweight annotations. … Figure 6 visualizes the average (over 10 trials) R² coefficient of fit for linear regression on the test set, in response to varying the privacy parameter ε. … Figure 7 shows train and test per-position accuracies as we make passes over the dataset. |
| Researcher Affiliation | Academia | Aditi Raghunathan (aditir@stanford.edu), Roy Frostig (rf@cs.stanford.edu), John Duchi (jduchi@stanford.edu), Percy Liang (pliang@stanford.edu), Stanford University, Stanford, CA |
| Pseudocode | No | The paper does not contain any clearly labeled pseudocode or algorithm blocks. |
| Open Source Code | No | The paper does not contain any statements about releasing source code or links to a code repository. |
| Open Datasets | Yes | We used the Wall Street Journal portion of the Penn Treebank. Sections 0-21 comprise the training set and 22-24 are test. |
| Dataset Splits | No | The paper specifies a training set (Sections 0-21 of the Penn Treebank) and a test set (Sections 22-24), but does not mention a distinct validation set or how one would be split off. |
| Hardware Specification | No | The paper does not provide specific details about the hardware (e.g., CPU/GPU models, memory, or cloud instances) used for running the experiments. |
| Software Dependencies | No | The paper mentions methods like 'stochastic gradient descent (SGD)' and 'beam search' but does not specify any software dependencies with version numbers. |
| Experiment Setup | Yes | We use a beam of size 500 after analytically marginalizing nodes outside the region. |
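The beam-search detail quoted above ("a beam of size 500") is the paper's only stated decoding parameter. As a point of reference, the general technique can be sketched as follows; this is an illustrative, generic beam search over tag sequences with a made-up toy scoring function, not the authors' implementation (which additionally marginalizes nodes outside a region analytically).

```python
def beam_search(tokens, tags, score, beam_size=500):
    """Return the highest-scoring tag sequence under `score`,
    keeping at most `beam_size` partial hypotheses per position."""
    # Each hypothesis is a pair (cumulative score, tag sequence so far).
    beam = [(0.0, [])]
    for t, _tok in enumerate(tokens):
        candidates = []
        for total, seq in beam:
            for tag in tags:
                candidates.append((total + score(t, seq, tag), seq + [tag]))
        # Prune to the top `beam_size` hypotheses by score.
        candidates.sort(key=lambda c: c[0], reverse=True)
        beam = candidates[:beam_size]
    return max(beam, key=lambda c: c[0])[1]

# Toy scoring function (hypothetical, for illustration only):
# reward NOUN after DET and VERB after NOUN.
TRANSITIONS = {("DET", "NOUN"): 2.0, ("NOUN", "VERB"): 2.0}

def toy_score(t, seq, tag):
    prev = seq[-1] if seq else None
    return TRANSITIONS.get((prev, tag), 0.0)

print(beam_search(["the", "dog", "ran"], ["DET", "NOUN", "VERB"], toy_score))
```

With a beam wide enough to hold all hypotheses (as here), this reduces to exhaustive search; the paper's beam of 500 trades exactness for tractability over the much larger Penn Treebank tag space.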