Structured Output Learning with Abstention: Application to Accurate Opinion Prediction
Authors: Alexandre Garcia, Chloé Clavel, Slim Essid, Florence d'Alché-Buc
ICML 2018
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Instantiated on a hierarchical abstention-aware loss, SOLA is shown to be relevant for fine-grained opinion mining and gives state-of-the-art results on this task. Moreover, the abstention-aware representations can be used to competitively predict user-review ratings based on a sentence-level opinion predictor. Section 5 presents the numerical experiments and Section 6 draws a conclusion. |
| Researcher Affiliation | Academia | LTCI, Télécom ParisTech, Paris, France. |
| Pseudocode | No | The paper describes algorithms and steps but does not include a formal pseudocode block or an algorithm section labeled as such. |
| Open Source Code | No | The paper does not provide any explicit statement about making its source code available, nor does it provide a link to a code repository. |
| Open Datasets | Yes | We test our model on the problem of aspect-based opinion mining on a subset of the Trip Advisor dataset released in (Marcheggiani et al., 2014). |
| Dataset Splits | No | The paper mentions 'predefined train and test sets' for the Trip Advisor dataset, but it does not specify a validation set split or methodology for creating one. It also does not provide specific percentages or counts for these splits. |
| Hardware Specification | No | The paper does not explicitly describe the hardware (e.g., specific CPU, GPU, or memory details) used to run its experiments. |
| Software Dependencies | No | The paper mentions using the 'InferSent' representation and states that it built a 'vector-valued regressor', but it does not specify versions for any libraries, frameworks (such as PyTorch or TensorFlow), or solvers. It likewise mentions 'ridge regression' without naming a specific software implementation or version. |
| Experiment Setup | Yes | In all our experiments, we rely on the expression of the Ha-loss presented in Section 4. The linear programming formulation of the pre-image problem used in the branch-and-bound solver is derived in the supplementary material and involves a decomposition similar to the one described in Section 2 for the H-loss. Implementing the Ha-loss requires choosing the weights c_i, c_{A_i}, and c_{A_i^c}. We first fix the c_i weights as c_i = c_{p(i)} / \|siblings(i)\|, for all i ∈ {1, …, d}, where 0 is the index of the root node. As for the abstention weights c_{A_i} and c_{A_i^c}, an exhaustive analysis of all possible choices is impossible due to the number of parameters involved; the experiments therefore focus on weighting schemes of the form c_{A_i} = K_A · c_i and c_{A_i^c} = K_{A^c} · c_i. The effect of the choices of K_A and K_{A^c} is illustrated on the opinion prediction task. |
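The hierarchical weighting scheme quoted in the Experiment Setup row can be sketched as follows. This is a minimal illustration, not the authors' code: the toy parent-pointer tree, the assumption that nodes are ordered parent-before-child, and the root weight c_0 = 1 are all assumptions not stated in the table above.

```python
from collections import Counter


def compute_weights(parent, K_A, K_Ac):
    """Compute the Ha-loss node weights c_i and abstention weights.

    parent[i] is the index of node i's parent; parent[0] is None (root).
    Assumes nodes are numbered so that every parent precedes its children.
    Implements c_i = c_{p(i)} / |siblings(i)|, with |siblings(i)| counted
    as the number of children of p(i) (including i itself), and then
    c_{A_i} = K_A * c_i and c_{A_i^c} = K_Ac * c_i.
    """
    n = len(parent)
    c = [0.0] * n
    c[0] = 1.0  # assumed root weight (not specified in the paper excerpt)

    # Number of children of each node, i.e. the sibling-group sizes.
    n_children = Counter(p for p in parent if p is not None)

    for i in range(1, n):
        c[i] = c[parent[i]] / n_children[parent[i]]

    c_A = [K_A * ci for ci in c]    # weight of abstaining on node i
    c_Ac = [K_Ac * ci for ci in c]  # weight on the complement event
    return c, c_A, c_Ac


# Toy tree: root 0 with children 1, 2; node 1 with children 3, 4.
c, c_A, c_Ac = compute_weights([None, 0, 0, 1, 1], K_A=2.0, K_Ac=0.5)
print(c)  # [1.0, 0.5, 0.5, 0.25, 0.25]
```

Note how the weight of a node is its parent's weight split evenly among the siblings, so each level of the hierarchy carries the same total weight; the scalars K_A and K_{A^c} then rescale these per-node weights uniformly for the abstention terms.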