Greedy Flipping for Constrained Word Deletion

Authors: Jin-ge Yao, Xiaojun Wan

AAAI 2017

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "Experimental results show that the proposed method achieves nearly identical performance with explicit ILP formulation while being much more efficient. We evaluate our methods on two corpora that were annotated by human annotators."
Researcher Affiliation | Academia | Jin-ge Yao, Xiaojun Wan; Institute of Computer Science and Technology, Peking University, Beijing 100871, China; The MOE Key Laboratory of Computational Linguistics, Peking University; {yaojinge, wanxiaojun}@pku.edu.cn
Pseudocode | Yes | Algorithm 1: Randomized constrained greedy flipping
  Input: sentence x with dependency tree t; scoring function F(x, d)
  Output: compression bit vector d
  1: randomly initialize bit vector d(0); k ← 0
  2: d(0) ← pre_specification(d(0))
  3: repeat
  4:   list ← top-down node list of d(k)
  5:   for each word/node in list do
  6:     d(k) ← simul_constraints(x, t, d(k))
  7:     d′ ← LS_with_constraints(x, t, d(k))
  8:     d(k+1) ← argmax over d ∈ {d(k), d′} of F(x, d)
  9:     k ← k + 1
  10:  end for
  11: until no change was made in this iteration
  12: return d* = d(k)
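A minimal Python sketch of the greedy-flipping loop above, under assumptions: `score` stands in for F(x, d), and the constraint handlers (pre_specification, simul_constraints, LS_with_constraints) are collapsed into a hypothetical `pre_specification` that pins pre-specified words plus a simple flip-one-bit local search, since the paper excerpt does not spell out their internals.

```python
import random

def pre_specification(d, keep=()):
    """Force pre-specified words (e.g. the root) to be kept (bit = 1)."""
    for i in keep:
        d[i] = 1
    return d

def greedy_flip(n, score, keep=(), seed=0):
    """Greedy bit-flipping over a length-n compression vector d.

    Starts from a random initialization, then repeatedly passes over the
    words, keeping any single-bit flip that improves the score, until a
    full pass makes no change (the stopping test in step 11 above).
    """
    rng = random.Random(seed)
    d = pre_specification([rng.randint(0, 1) for _ in range(n)], keep)
    changed = True
    while changed:
        changed = False
        for i in range(n):            # pass over the word/node list
            if i in keep:             # pre-specified words stay kept
                continue
            cand = d[:]
            cand[i] ^= 1              # local move: flip bit i
            if score(cand) > score(d):  # argmax over {d, d'} in step 8
                d = cand
                changed = True
    return d
```

In the paper the search is restarted from K = min{300, 2|x|} random initializations and the best-scoring result is kept; the sketch shows a single restart with a fixed seed.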
Open Source Code | No | The paper does not provide an explicit statement or link for the availability of its source code.
Open Datasets | Yes | "We evaluate our methods on two corpora that were annotated by human annotators." (Footnote 4: Available at http://jamesclarke.net/research/resources)
Dataset Splits | Yes | "We split the datasets into training, development and test sets according to (Galanis and Androutsopoulos 2010)."
Hardware Specification | No | The paper does not specify the hardware used for the experiments.
Software Dependencies | No | The paper mentions SRILM and GLPK as tools used, but does not provide specific version numbers for them.
Experiment Setup | Yes | "We trained the model using the structured perceptron (Collins 2002) modified with AdaGrad updates (Duchi, Hazan, and Singer 2011):

  w_i^{t+1} ← w_i^t − (η / √(Σ_{τ=1}^t (s_i^τ)²)) s_i^t,   (15)

where s^t = f(x^t, ŷ^t) − f(x^t, y^t) is the subgradient for instance t and η is the learning rate, set to the constant 1.0 in this work." RCGF: the counterparts of the ILP models using the proposed randomized constrained greedy flipping algorithm with K = min{300, 2|x|} different initializations. "We take the discriminative weights after five training epochs as they perform well on the development sets."
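The per-coordinate AdaGrad perceptron update of Eq. (15) can be sketched as follows, assuming dense feature vectors as plain lists; `gold_feats` and `pred_feats` are hypothetical stand-ins for f(x^t, y^t) and f(x^t, ŷ^t), and the caller maintains `sum_sq` as the running sum of squared subgradients.

```python
import math

def adagrad_update(w, sum_sq, gold_feats, pred_feats, eta=1.0):
    """One structured-perceptron step with per-coordinate AdaGrad scaling.

    s^t_i = f(x^t, y_hat^t)_i - f(x^t, y^t)_i  (the subgradient), and
    w_i <- w_i - eta / sqrt(sum of (s^tau_i)^2 so far) * s^t_i.
    """
    for i in range(len(w)):
        s = pred_feats[i] - gold_feats[i]   # subgradient coordinate s^t_i
        if s:                                # zero subgradient: no change
            sum_sq[i] += s * s               # accumulate (s^tau_i)^2
            w[i] -= eta / math.sqrt(sum_sq[i]) * s
    return w, sum_sq
```

With η = 1.0 as in the paper, each coordinate's effective step size shrinks as its accumulated squared subgradient grows.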