Exclusive Feature Learning on Arbitrary Structures via $\ell_{1,2}$-norm

Authors: Deguang Kong, Ryohei Fujimaki, Ji Liu, Feiping Nie, Chris Ding

NeurIPS 2014 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Extensive experiments on both synthetic and real-world datasets validate the proposed method.
Researcher Affiliation | Collaboration | (1) Dept. of Computer Science, University of Texas Arlington, TX 76019; (2) NEC Laboratories America, Cupertino, CA 95014; (3) Dept. of Computer Science, University of Rochester, Rochester, NY 14627
Pseudocode | No | No structured pseudocode or clearly labeled algorithm blocks were found.
Open Source Code | No | The paper does not provide concrete access to source code (specific repository link, explicit code release statement, or code in supplementary materials) for the methodology described.
Open Datasets | Yes | "To validate the effectiveness of our method, we first conduct experiments using Eq.(7) on two synthetic datasets, and then show experiments on real-world datasets. ... We perform classification tasks on these different datasets." Table 1 (characteristics of datasets) lists isolet (1560 samples, 617 dimensions, UCI), ionosphere (351 samples, 34 dimensions, UCI), mnist(0,1) (3125 samples, 784 dimensions, image), and Leuml (72 samples, 3571 dimensions, biology). The paper provides URLs for these datasets in its references: Housing, http://archive.ics.uci.edu/ml/datasets/Housing; Ionosphere, http://archive.ics.uci.edu/ml/datasets/Ionosphere; isolet, http://archive.ics.uci.edu/ml/datasets/ISOLET; mnist, http://yann.lecun.com/exdb/mnist/; Leuml, http://www.stat.duke.edu/courses/Spring01/sta293b/datasets.html
Dataset Splits | Yes | "We generate n = 1000 data, with p = [120, 140, ..., 220, 240] and do 5-fold cross validation. ... We generate n = 1000 data, B = 10, with varied p = [20, 21, ..., 24, 25] and do 5-fold cross validation." (A cross-validation sketch follows the table.)
Hardware Specification | Yes | "We run different algorithms on an Intel i5-3317 CPU, 1.70 GHz, 8 GB RAM desktop."
Software Dependencies | No | The paper mentions using logistic regression as the loss function and a Support Vector Machine (SVM) with linear kernel for classification, but does not provide specific version numbers for any software libraries or dependencies used (e.g., Python, scikit-learn, PyTorch versions).
Experiment Setup | Yes | "We solve Eq.(7) using current y and X with least square loss. The group settings are: (i, p+i, 2p+i), for 1 ≤ i ≤ p. ... In our method, parameters α, β are tuned to select different numbers of features. Exclusive LASSO groups are set according to feature correlations (i.e., the threshold θ in Eq.(8) is set to 0.90)." (A group-construction sketch follows the table.)
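
The split protocol quoted under Dataset Splits is straightforward to reproduce. Below is a minimal sketch of the 5-fold cross-validation loop on n = 1000 synthetic samples; it assumes Python with NumPy and scikit-learn (neither is confirmed by the paper), and the Gaussian data generator is a placeholder rather than the paper's actual synthetic design.

```python
import numpy as np
from sklearn.model_selection import KFold

# n = 1000 synthetic samples, as in the paper's synthetic experiments;
# this Gaussian generator is a placeholder, not the paper's design.
rng = np.random.default_rng(0)
n, p = 1000, 120
X = rng.standard_normal((n, p))
y = rng.standard_normal(n)

# 5-fold cross-validation, matching the protocol quoted in the table
kf = KFold(n_splits=5, shuffle=True, random_state=0)
for fold, (train_idx, test_idx) in enumerate(kf.split(X)):
    X_tr, X_te = X[train_idx], X[test_idx]
    y_tr, y_te = y[train_idx], y[test_idx]
    print(f"fold {fold}: {len(train_idx)} train / {len(test_idx)} test")
```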
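
The Experiment Setup row mentions two group constructions: fixed synthetic groups (i, p+i, 2p+i) for 1 ≤ i ≤ p, and exclusive LASSO groups derived from feature correlations with threshold θ = 0.90 (Eq.(8)). The sketch below, again assuming Python with NumPy/SciPy, illustrates both. Taking connected components of the thresholded correlation graph is an illustrative stand-in for the paper's Eq.(8), and exclusive_penalty evaluates an exclusive-lasso-style regularizer (the sum over groups of squared L1 norms) on the resulting groups.

```python
import numpy as np
from scipy.sparse import csr_matrix
from scipy.sparse.csgraph import connected_components

def correlation_groups(X, theta=0.90):
    """Group features whose absolute pairwise correlation exceeds theta.

    Illustrative stand-in for the paper's Eq.(8): features are joined by
    an edge when |corr| > theta, and groups are the connected components
    of the resulting graph.
    """
    C = np.corrcoef(X, rowvar=False)        # p x p feature correlations
    A = (np.abs(C) > theta).astype(int)     # adjacency by thresholding
    np.fill_diagonal(A, 0)                  # ignore self-correlation
    n_groups, labels = connected_components(csr_matrix(A), directed=False)
    return [np.where(labels == g)[0] for g in range(n_groups)]

def exclusive_penalty(w, groups):
    """Exclusive-lasso-style penalty: sum over groups of squared L1 norms."""
    return sum(np.abs(w[g]).sum() ** 2 for g in groups)

# Fixed synthetic groups from the paper: (i, p+i, 2p+i) for 1 <= i <= p
# (0-indexed here), over a 3p-dimensional weight vector.
p = 25
synthetic_groups = [np.array([i, p + i, 2 * p + i]) for i in range(p)]

# Correlation-based groups at theta = 0.90, as in the real-data setup
rng = np.random.default_rng(0)
X = rng.standard_normal((1000, 3 * p))
groups = correlation_groups(X, theta=0.90)

w = rng.standard_normal(3 * p)
print(len(groups), exclusive_penalty(w, synthetic_groups))
```

In the paper's actual pipeline this penalty would be paired with the least-square (or logistic) loss of Eq.(7) and minimized by the authors' optimization algorithm; the snippet only constructs the groups and evaluates the regularizer.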