Classification Under Human Assistance

Authors: Abir De, Nastaran Okati, Ali Zarezade, Manuel Gomez Rodriguez. Pages 5905–5913.

AAAI 2021

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Experiments on synthetic and real-world data from several applications in medical diagnosis illustrate our theoretical findings and demonstrate that, under human assistance, supervised learning models trained to operate under different automation levels can outperform those trained for full automation as well as humans operating alone.
Researcher Affiliation | Academia | Abir De (1*), Nastaran Okati (2*), Ali Zarezade (2), Manuel Gomez Rodriguez (2). 1: IIT Bombay; 2: Max Planck Institute for Software Systems.
Pseudocode | Yes | Algorithm 1: Distorted greedy algorithm
Open Source Code | Yes | Our code and data are available in https://github.com/NetworksLearning/classification-under-assistance
Open Datasets | Yes | We experiment with three publicly available datasets (Decencière et al. 2014; Hoover, Kouznetsova, and Goldbaum 2000), each of them from a different application in medical diagnosis: (i) Messidor: |V| = 400 eye images... (ii) Stare: |V| = 373 retinal images... (iii) Aptos: |V| = 705 retinal images.
Dataset Splits | No | The paper states that 60% of the data is used for training and 40% for testing, but does not explicitly mention a validation split.
Hardware Specification | No | The paper does not provide specific details about the hardware used for the experiments (e.g., CPU/GPU models, memory).
Software Dependencies | No | The paper mentions software components such as ResNet, VGG16, PCA, a multilayer perceptron, and a logistic regression model, but does not specify version numbers for any of them.
Experiment Setup | Yes | In all our experiments, we generate |V| = 400 samples... we used λ = 1 and H = 0.2 and, for nonlinear SVMs, we used a quadratic kernel K(x_i, x_j) = (½ ⟨x_i, x_j⟩)². Moreover, we generate the scores provided by human experts h(x) per label by drawing samples from two uniform distributions, i.e., h(x) ∼ Unif[H, H + 1] if y = 1 and h(x) ∼ Unif[−1 + H, H] otherwise. ...we set the value of λ using cross validation under full automation.
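The pseudocode row above refers to a distorted greedy algorithm. The paper's Algorithm 1 is a specialized variant; the following is a minimal sketch of the generic distorted greedy scheme of Harshaw et al. (2019) that such algorithms build on, which maximizes g(S) − c(S) for a monotone submodular gain g and a modular cost c under a cardinality budget k. The function names and the toy coverage objective in the usage example are illustrative assumptions, not the paper's code.

```python
def distorted_greedy(ground, g, c, k):
    """Greedily pick at most k elements of `ground` to maximize g(S) - c(S).

    g: set -> float, assumed monotone submodular; c: element -> float, modular.
    """
    S = set()
    for i in range(k):
        best_e, best_val = None, 0.0
        for e in ground - S:
            marginal = g(S | {e}) - g(S)
            # The distortion factor (1 - 1/k)^(k - i - 1) down-weights the
            # submodular gain in early iterations relative to the cost.
            val = (1 - 1 / k) ** (k - i - 1) * marginal - c(e)
            if val > best_val:
                best_e, best_val = e, val
        # An element is added only if its distorted net gain is positive.
        if best_e is not None:
            S.add(best_e)
    return S
```

For example, with a toy coverage objective `g(S) = |union of cover[e] for e in S|` over `cover = {0: {"a","b","c"}, 1: {"b","c"}, 2: {"d"}}`, a uniform cost of 0.1, and k = 2, the sketch selects {0, 2}: element 1's marginal gain collapses to zero once 0 is chosen, so its distorted value turns negative and it is skipped.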
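The experiment-setup quote describes simulating human expert scores from two uniform distributions. A minimal sketch of that sampling step, assuming labels y ∈ {1, −1}, the paper's H = 0.2 as a default, and a hypothetical function name:

```python
import random

def sample_human_score(y, H=0.2, rng=random):
    """Draw a simulated expert score h(x) for a sample with label y.

    h(x) ~ Unif[H, H+1] if y = 1, and h(x) ~ Unif[-1+H, H] otherwise,
    as in the setup quoted above.
    """
    if y == 1:
        return rng.uniform(H, H + 1)   # e.g. scores in [0.2, 1.2] for H = 0.2
    return rng.uniform(-1 + H, H)      # e.g. scores in [-0.8, 0.2] for H = 0.2
```

Under this scheme, a sign rule on h(x) never errs on the positive class (its scores stay above H > 0), while a negative sample scores above zero with probability H, so H directly controls how unreliable the simulated expert is.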