Stacking With Auxiliary Features

Authors: Nazneen Fatema Rajani, Raymond J. Mooney

IJCAI 2017

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Section 5, "Experimental Results"
Researcher Affiliation | Academia | Nazneen Fatema Rajani, Department of Computer Science, University of Texas at Austin (nrajani@cs.utexas.edu); Raymond J. Mooney, Department of Computer Science, University of Texas at Austin (mooney@cs.utexas.edu)
Pseudocode | No | The paper does not contain any pseudocode or clearly labeled algorithm blocks.
Open Source Code | No | The paper does not provide any statement or link indicating the release of open-source code for the described methodology.
Open Datasets | Yes | "We use SWAF to demonstrate new state-of-the-art results on two of the three tasks... The first two are in NLP and part of the NIST Knowledge Base Population (KBP) challenge: Cold Start Slot-Filling (CSSF) and Entity Discovery and Linking (EDL) [Ji et al., 2016]. The third task is in computer vision and part of the ImageNet 2015 challenge [Russakovsky et al., 2015]: Object Detection from images."
Dataset Splits | Yes | "A small random sample of the training set (10%) was used as validation data to set the hyper-parameters for each of the tasks."
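The quoted split procedure, holding out a random 10% of the training set as validation data, can be sketched as follows (a minimal illustration; the function name and signature are assumptions, not from the paper):

```python
import random

def split_train_validation(examples, val_fraction=0.1, seed=0):
    """Hold out a random fraction of the training set as validation data.

    `examples` is any list of training instances. This mirrors the paper's
    description of a 10% random validation sample; the seed and interface
    are illustrative choices, not specified by the authors.
    """
    rng = random.Random(seed)
    indices = list(range(len(examples)))
    rng.shuffle(indices)
    n_val = int(len(examples) * val_fraction)
    val_idx = set(indices[:n_val])
    train = [ex for i, ex in enumerate(examples) if i not in val_idx]
    val = [ex for i, ex in enumerate(examples) if i in val_idx]
    return train, val
```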
Hardware Specification | No | No specific hardware details (e.g., GPU/CPU models, memory amounts) used for running experiments are provided.
Software Dependencies | No | The paper mentions software such as the Caffe system and a word2vec model, but does not provide specific version numbers for any software dependencies.
Experiment Setup | Yes | "For the two NLP tasks, we used an L1 regularized linear SVM weighted by the number of instances for each class, as a meta-classifier for stacking. For the object detection task, we used an SVM with an RBF kernel. ... we first embed words into vectors by training a word2vec model [Mikolov et al., 2013] with word vector dimension of 100, window-size set to 8, 10 negative samples and 10 iterations on Freebase."
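The setup quoted above can be approximated in scikit-learn. This is a sketch under stated assumptions: the paper releases no code, so the mapping of "weighted by the number of instances for each class" to `class_weight="balanced"` and the use of gensim-style names for the word2vec hyper-parameters are plausible readings, not the authors' implementation.

```python
# Sketch of the stacking meta-classifiers described in the paper,
# using scikit-learn. All hyper-parameter mappings are assumptions.
from sklearn.svm import LinearSVC, SVC

# NLP tasks (CSSF, EDL): L1-regularized linear SVM with per-class
# instance weighting ("balanced" is one plausible reading of
# "weighted by the number of instances for each class").
nlp_meta_classifier = LinearSVC(penalty="l1", dual=False, class_weight="balanced")

# Object detection task: SVM with an RBF kernel.
vision_meta_classifier = SVC(kernel="rbf")

# word2vec hyper-parameters quoted in the paper; the keys follow
# gensim's Word2Vec argument names (an assumption).
word2vec_params = {
    "vector_size": 100,  # word vector dimension of 100
    "window": 8,         # window-size set to 8
    "negative": 10,      # 10 negative samples
    "epochs": 10,        # 10 iterations
}
```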