Tau-FPL: Tolerance-Constrained Learning in Linear Time

Authors: Ao Zhang, Nan Li, Jian Pu, Jun Wang, Junchi Yan, Hongyuan Zha

AAAI 2018 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "Both theoretical analysis and experimental results show superior performance of the proposed τ-FPL over the existing approaches." (see also the Experiment Results section)
Researcher Affiliation | Collaboration | (1) Shanghai Key Laboratory of Trustworthy Computing, MOE International Joint Lab of Trustworthy Software, School of Computer Science and Software Engineering, East China Normal University, Shanghai, China; (2) Institute of Data Science and Technologies, Alibaba Group, Hangzhou, China; (3) IBM Research China; (4) Georgia Institute of Technology, Atlanta, USA
Pseudocode | Yes | Algorithm 1 (τ-FPL Ranking) and Algorithm 2 (Linear-time Projection onto the Top-k Simplex); a hedged sketch of the projection step appears after the table.
Open Source Code | No | The paper neither states that source code for the described method is released nor links to a code repository.
Open Datasets | Yes | The paper evaluates performance on public benchmark datasets of various sizes and from different domains, drawn from the LIBSVM repository: https://www.csie.ntu.edu.tw/~cjlin/libsvmtools/datasets/binary (a loading sketch appears after the table).
Dataset Splits | Yes | For small-scale datasets (≤ 10,000 instances), 30 stratified hold-out tests are carried out with 2/3 of the data for training and 1/3 for testing; for large datasets, 10 rounds are run instead. In each round, hyper-parameters are chosen by 5-fold cross-validation over a grid (a protocol sketch appears after the table).
Hardware Specification | Yes | All experiments are run on an Intel Core i5 processor.
Software Dependencies | No | The paper describes algorithms and methods but does not provide specific software dependencies with version numbers (e.g., programming languages, libraries, or frameworks).
Experiment Setup | No | The paper states that "hyper-parameters are chosen by 5-fold cross validation from grid" and that "the regularization parameter R is selected to minimize (3)", but it does not report the specific hyper-parameter values used (e.g., learning rate, batch size, number of epochs) in the main text.
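
For context on the Pseudocode row: Algorithm 2 projects a vector onto the top-k simplex, which (up to the paper's exact parameterization) is a capped simplex {w : Σ w_i = r, 0 ≤ w_i ≤ c}. The sketch below is not the paper's linear-time procedure; it is a minimal, generic bisection on the dual variable (O(n log 1/ε) per projection), and the function name and the parameters r and c are illustrative assumptions.

```python
import numpy as np

def project_capped_simplex(v, r=1.0, c=0.2, tol=1e-10, max_iter=200):
    """Project v onto the capped simplex {w : sum(w) = r, 0 <= w_i <= c}.

    The projection has the closed form w_i = clip(v_i - theta, 0, c) for a
    scalar dual variable theta; sum(clip(v - theta, 0, c)) is non-increasing
    in theta, so bisection recovers the theta with sum(w) = r.
    """
    v = np.asarray(v, dtype=float)
    assert c * v.size >= r, "infeasible: need n * c >= r"
    lo, hi = v.min() - c, v.max()  # sum is n*c >= r at lo and 0 <= r at hi
    theta = lo
    for _ in range(max_iter):
        theta = 0.5 * (lo + hi)
        s = np.clip(v - theta, 0.0, c).sum()
        if abs(s - r) < tol:
            break
        if s > r:
            lo = theta  # sum too large: raise theta to shrink it
        else:
            hi = theta
    return np.clip(v - theta, 0.0, c)

w = project_capped_simplex(np.random.randn(10))
print(round(w.sum(), 6), w.min() >= 0, w.max() <= 0.2)  # 1.0 True True
```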
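
The LIBSVM page linked in the Open Datasets row distributes the binary-classification datasets in svmlight text format, which scikit-learn can read directly. The file name a9a below is just one dataset from that page, chosen for illustration.

```python
from sklearn.datasets import load_svmlight_file

# "a9a" is one binary dataset from the LIBSVM page linked above; download
# the file first, then load it as a sparse feature matrix plus label vector.
X, y = load_svmlight_file("a9a")
print(X.shape, sorted(set(y)))  # (32561, 123) [-1.0, 1.0]
```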
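
The evaluation protocol from the Dataset Splits row can be sketched as follows. Since no τ-FPL implementation is public, scikit-learn's LinearSVC stands in for the learner, and the parameter grid is an illustrative assumption rather than the paper's grid.

```python
import numpy as np
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.svm import LinearSVC

def repeated_holdout(X, y, n_rounds=30, grid=None):
    # Stratified 2/3 train / 1/3 test hold-out, repeated n_rounds times
    # (30 for small datasets, 10 for large ones, per the paper); in each
    # round, hyper-parameters are picked by 5-fold CV on the training split.
    grid = grid or {"C": [0.01, 0.1, 1.0, 10.0]}  # illustrative grid
    scores = []
    for seed in range(n_rounds):
        X_tr, X_te, y_tr, y_te = train_test_split(
            X, y, test_size=1 / 3, stratify=y, random_state=seed)
        search = GridSearchCV(LinearSVC(), grid, cv=5)  # LinearSVC = stand-in
        search.fit(X_tr, y_tr)
        scores.append(search.score(X_te, y_te))
    return float(np.mean(scores)), float(np.std(scores))
```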