Top-k Hierarchical Classification

Author: Sechan Oh

AAAI 2017

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Via numerical experiments with real-world data sets, the paper shows that its solution outperforms two baseline methods that each address only one of the two issues. It compares the misclassification cost of its Bayes-optimal classifier (Algorithm K-H) with those of the two benchmark methods; Table 2 reports the results for the gene function data.
Researcher Affiliation | Industry | Sechan Oh, Moloco, 165 University Avenue, Palo Alto, California 94301
Pseudocode | Yes | Algorithm K-H is given in the paper.
Open Source Code | No | The paper provides no concrete access (links or explicit statements) to source code for the described methodology.
Open Datasets | Yes | The paper uses the twelve functional genomics data sets of Clare and King (2003) and Vens et al. (2008) (https://dtai.cs.kuleuven.be/clus/hmcdatasets/).
Dataset Splits | No | The paper mentions using "training and validation data sets" but does not specify split percentages, sample counts, or the partitioning methodology needed to reproduce the splits.
Hardware Specification | No | The paper does not report the hardware (e.g., CPU or GPU models) used to run the experiments.
Software Dependencies | No | The paper mentions a "multinomial logit model with lasso penalty" but gives no version numbers for any software libraries, frameworks, or solvers used in its implementation.
Experiment Setup | No | The paper specifies the parameters of its proposed loss function (e.g., the c_d values, K, c1, c2) and states that class probabilities are estimated with a multinomial logit model with lasso penalty. However, it omits the hyperparameters of that model (e.g., lasso regularization strength, optimizer details, learning rate, batch size, number of epochs) needed for reproducible training.
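To make the gap concrete: the paper's "multinomial logit model with lasso penalty" is underspecified, so any reimplementation must choose its own hyperparameters. A minimal sketch of one common realization, using scikit-learn's L1-penalized multinomial logistic regression on synthetic data; the regularization strength `C`, the `saga` solver, and the data here are illustrative assumptions, not values from the paper.

```python
# Sketch of probability estimation with an L1-penalized (lasso)
# multinomial logit, as a reproducer might attempt it. All
# hyperparameter choices below are assumptions, since the paper
# reports none.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 10))    # synthetic features
y = rng.integers(0, 4, size=200)  # synthetic labels, 4 classes

clf = LogisticRegression(
    penalty="l1",
    solver="saga",   # saga supports the l1 penalty with multinomial loss
    C=1.0,           # inverse lasso strength: assumed, not reported
    max_iter=1000,
)
clf.fit(X, y)

# Per-class probability estimates, one row per sample.
probs = clf.predict_proba(X)
print(probs.shape)  # (200, 4)
```

Because `C` (and the solver's convergence settings) materially affect the estimated probabilities, a reproduction would need to tune them on a validation split, which the paper also leaves unspecified.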