Diagnosis determination: decision trees optimizing simultaneously worst and expected testing cost

Authors: Ferdinando Cicalese, Eduardo Laber, Aline Medeiros Saettler

ICML 2014

Reproducibility variables, results, and LLM responses:

Research Type: Theoretical
LLM Response: Our algorithm builds a strategy (decision tree) which attains a logarithmic approximation simultaneously for the expected and the worst cost spent. This is best possible since, under standard complexity assumptions, no algorithm can guarantee an o(log n) approximation. We will show that Algorithm 2 attains a logarithmic approximation for DFEP. The proof of Theorem 3 will now follow by combining the sequences provided by the previous three lemmas. (The claimed bounds are restated after these entries.)

Researcher Affiliation: Academia
LLM Response: University of Salerno, Italy; PUC-Rio, Brazil.

Pseudocode: Yes
LLM Response: Algorithm 2: Decision tree with cost O(log n) * OPT_E(I); Procedure DecTree(S, T, C, p, c). (A generic, hypothetical greedy sketch, not the paper's procedure, follows after these entries.)

Open Source Code: No
LLM Response: The information is insufficient. The paper does not contain any statement about making the source code for its methodology publicly available, nor does it provide a link to a code repository.

Open Datasets: No
LLM Response: The information is insufficient. The paper is theoretical and does not describe experiments that would involve a training dataset. It provides an illustrative example in Figure 1, but this is not a dataset used for experimental training.

Dataset Splits: No
LLM Response: The information is insufficient. The paper is theoretical and does not describe experiments that would involve a validation dataset.

Hardware Specification: No
LLM Response: The information is insufficient. The paper is theoretical and focuses on algorithm design and analysis. It does not mention any specific hardware used for running experiments.

Software Dependencies: No
LLM Response: The information is insufficient. The paper is theoretical and does not specify any software dependencies with version numbers that would be required to reproduce experiments or implementations.

Experiment Setup: No
LLM Response: The information is insufficient. The paper is theoretical and focuses on algorithm design and analysis. It does not provide details of an experimental setup such as hyperparameters or training configurations.
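The guarantee quoted in the Research Type entry can be restated as a pair of simultaneous bounds. The notation below is an assumption made for readability: cost_E and cost_W denote the expected and worst-case testing cost of the returned tree D, OPT_E(I) and OPT_W(I) the corresponding optima for instance I, and n the number of objects; the hidden constants are not reproduced from the paper.

    % Simultaneous logarithmic approximation, as claimed for Algorithm 2:
    \[
      \mathrm{cost}_E(D) = O(\log n)\cdot \mathrm{OPT}_E(I)
      \quad\text{and}\quad
      \mathrm{cost}_W(D) = O(\log n)\cdot \mathrm{OPT}_W(I).
    \]
    % Under standard complexity assumptions, no polynomial-time algorithm
    % achieves an o(log n) approximation for either objective, so this is
    % best possible up to constant factors.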
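The Pseudocode entry refers to the paper's Procedure DecTree(S, T, C, p, c). The sketch below is not that procedure; it is a minimal, hypothetical greedy baseline in Python that only illustrates the problem setup (objects with probabilities p, tests with costs c): at each node it picks the test that separates the most probability mass per unit of cost and recurses. All names here (Node, build_tree) are invented for illustration.

    # Hypothetical sketch only: a generic greedy test-selection routine for the
    # identification setting described in the paper (objects S, tests T with
    # costs C, object probabilities p). It is NOT the paper's Algorithm 2
    # (DecTree); it illustrates the input format and the shape of a decision
    # tree, not the approximation guarantee.

    from dataclasses import dataclass, field

    @dataclass
    class Node:
        test: int | None = None                        # test applied at this node
        children: dict = field(default_factory=dict)   # outcome -> child Node
        obj: int | None = None                          # identified object (leaf)

    def build_tree(objects, tests, cost, prob):
        """objects: list of object ids; tests: dict test_id -> {obj: outcome};
        cost: dict test_id -> positive cost; prob: dict obj -> probability."""
        if len(objects) <= 1:
            return Node(obj=objects[0] if objects else None)

        def partition(t):
            # group the remaining objects by the outcome of test t
            groups = {}
            for o in objects:
                groups.setdefault(tests[t][o], []).append(o)
            return groups

        def score(t):
            # probability mass separated from the heaviest outcome group,
            # per unit of test cost
            groups = partition(t)
            total = sum(prob[o] for o in objects)
            heaviest = max(sum(prob[o] for o in g) for g in groups.values())
            return (total - heaviest) / cost[t]

        useful = [t for t in tests if len({tests[t][o] for o in objects}) > 1]
        if not useful:                    # remaining objects are indistinguishable
            return Node(obj=objects[0])

        best = max(useful, key=score)
        node = Node(test=best)
        for outcome, group in partition(best).items():
            node.children[outcome] = build_tree(group, tests, cost, prob)
        return node

For example, with two equiprobable objects and a single unit-cost test that distinguishes them, build_tree([1, 2], {0: {1: "yes", 2: "no"}}, {0: 1.0}, {1: 0.5, 2: 0.5}) returns a root applying test 0 with two leaves, one per object.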