Transductive Optimization of Top k Precision

Authors: Li-Ping Liu, Thomas G. Dietterich, Nan Li, Zhi-Hua Zhou

IJCAI 2016

Reproducibility assessment (variable, result, and LLM response):
- Research Type (Experimental): Experiments and analysis confirm the benefit of incorporating k in the learning process. In the paper's experimental evaluations, the performance of TTK matches or exceeds existing state-of-the-art methods on 7 benchmark datasets for binary classification and 3 reserve design problem instances.
- Researcher Affiliation (Collaboration): Li-Ping Liu (1), Thomas G. Dietterich (1), Nan Li (2), Zhi-Hua Zhou (2). (1) EECS, Oregon State University, Corvallis, Oregon 97331, USA; (2) National Key Laboratory for Novel Software Technology, Nanjing University, Nanjing 210023, China. Emails: {liuli@eecs.oregonstate.edu, tgd@oregonstate.edu}, {lin, zhouzh}@lamda.nju.edu.cn. Nan Li is now working at Alibaba Group, Hangzhou, China.
- Pseudocode (Yes): Algorithm 1, "Find a descending feasible direction".
- Open Source Code (No): The paper provides no links to, or explicit statements about, released source code for the described methodology.
- Open Datasets (Yes): Seven datasets, {diabetes, ionosphere, sonar, spambase, splice} from the UCI repository and {german-numer, svmguide3} from the LIBSVM web site, are widely studied binary classification datasets. The other three datasets, NY16, NY18, and NY88, are species distribution datasets extracted from a large eBird dataset [Sullivan et al., 2009].
- Dataset Splits (Yes): Each algorithm is run 10 times on 10 random splits of each dataset. Each of these algorithms requires setting the regularization parameter C. This was done by performing five 2-fold internal cross-validation runs within each training set and selecting the value of C from the set {0.01, 0.1, 1, 10, 100} that maximized precision on the top 5% of the (cross-validation) test points.
- Hardware Specification (No): The paper does not provide specific details regarding the hardware used for running the experiments.
- Software Dependencies (No): The paper mentions software such as Gurobi and the UniverSVM implementation but does not provide version numbers for these or other dependencies. For example, it states "We used Gurobi [Gurobi Optimization, 2015]." without specifying a Gurobi version.
- Experiment Setup (Yes): We set k to select 5% of the test instances. For the SVM and AATP methods, we fit them to the training data and then obtain a top-k prediction by adjusting the intercept term b. The hyper-parameter C is set to 1 for all methods.
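The C-selection protocol quoted under Dataset Splits (five 2-fold internal cross-validation runs, choosing the C from {0.01, 0.1, 1, 10, 100} that maximizes precision on the top 5% of held-out points) can be sketched as below. This is not the paper's code: `fit_ridge` is a hypothetical stand-in scorer for the actual SVM, used only so the loop is runnable; labels are assumed to be in {-1, +1}.

```python
import numpy as np

def precision_at_top(scores, y, frac=0.05):
    """Fraction of positives among the top `frac` of points ranked by score."""
    k = max(1, int(np.ceil(frac * len(scores))))
    top = np.argsort(scores)[::-1][:k]
    return np.mean(y[top] == 1)

def fit_ridge(X, y, C):
    """Hypothetical stand-in for the paper's SVM: regularized least squares,
    where larger C means weaker regularization."""
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + np.eye(d) / C, X.T @ y)

def select_C(X, y, grid=(0.01, 0.1, 1, 10, 100), reps=5, seed=0):
    """Five 2-fold internal CV runs within the training set; return the C
    that maximizes average precision on the top 5% of held-out points."""
    rng = np.random.default_rng(seed)
    n = len(y)
    results = {C: [] for C in grid}
    for _ in range(reps):
        perm = rng.permutation(n)
        folds = (perm[: n // 2], perm[n // 2:])
        for train, held in (folds, folds[::-1]):  # both 2-fold directions
            for C in grid:
                w = fit_ridge(X[train], y[train], C)
                results[C].append(precision_at_top(X[held] @ w, y[held]))
    return max(grid, key=lambda C: np.mean(results[C]))
```

Swapping in an actual SVM solver for `fit_ridge` would reproduce the protocol as described; the CV loop and the precision-at-top-5% criterion are unchanged by that substitution.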
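The intercept adjustment described under Experiment Setup (shifting b so that an SVM's scores label exactly the top k = 5% of test instances positive) reduces to thresholding at the k-th largest score. A minimal sketch, assuming raw decision scores are available and there are no ties at the cutoff:

```python
import numpy as np

def top_k_intercept(scores, k):
    """Return the shift to add to the intercept b so that exactly k test
    points score positive (assumes no ties at the cutoff)."""
    order = np.sort(scores)[::-1]
    if k >= len(scores):
        return -order[-1] + 1e-9  # boundary just below the lowest score
    # place the decision boundary midway between the k-th and (k+1)-th scores
    return -(order[k - 1] + order[k]) / 2.0

def predict_top_fraction(scores, frac=0.05):
    """Label the top 5% (by default) of test instances positive."""
    k = max(1, int(np.ceil(frac * len(scores))))
    b_shift = top_k_intercept(scores, k)
    return (scores + b_shift > 0).astype(int)
```

For example, with 20 test scores and frac=0.05, k = 1, so only the highest-scoring instance is labeled positive.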