Learning with Unsure Responses

Authors: Kunihiro Takeoka, Yuyang Dong, Masafumi Oyamada

AAAI 2020

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Experimental results on real-world and synthetic data demonstrate the performance of our method and its superiority over baseline methods.
Researcher Affiliation | Industry | Kunihiro Takeoka, NEC Corporation, k takeoka@nec.com; Yuyang Dong, NEC Corporation, dongyuyang@nec.com; Masafumi Oyamada, NEC Corporation, oyamada@nec.com
Pseudocode | No | The paper does not contain any structured pseudocode or algorithm blocks.
Open Source Code | No | The paper does not provide any concrete access information (e.g., a repository link or an explicit statement of code release) for the methodology's source code.
Open Datasets | Yes | For synthetically labeled data, we picked up 700 images as training data and 7000 images as testing data, which represent 0 or 6 from the MNIST (LeCun et al. 1998) image dataset. [...] We picked up 200 dog images and 200 wolf images from the ImageNet dataset (Deng et al. 2009)
Dataset Splits | Yes | The hyper-parameters are tuned with the validation data. [...] We take 100 images, which is 25% of the responses from an annotator, as the training data, and use the remaining 300 images as the test data.
Hardware Specification | No | The paper does not provide specific hardware details (e.g., GPU/CPU models, memory) used for running the experiments.
Software Dependencies | No | The paper mentions the Adam (Adaptive moment estimation) optimization algorithm but does not provide specific version numbers for any software dependencies (e.g., programming languages, libraries, or frameworks).
Experiment Setup | Yes | The hyper-parameters are tuned with the validation data. We used the Adam (Adaptive moment estimation) optimization algorithm in our experiment. [...] we found that it seems appropriate to set γ in the value range 1/|U| < γ ≤ 1.
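The Experiment Setup row above pins down two reproducible details: Adam is the optimizer, and the hyper-parameter γ should lie in the range 1/|U| < γ ≤ 1, where |U| is the size of the option set. A minimal sketch of that configuration is below; the paper releases no code, so the function and variable names here are hypothetical, and the concrete values (γ = 0.5, |U| = 4) are illustrative only.

```python
def valid_gamma(gamma, option_set_size):
    """Check the paper's recommended range: 1/|U| < gamma <= 1.

    gamma           -- the hyper-parameter gamma (hypothetical name)
    option_set_size -- |U|, the number of response options (hypothetical name)
    """
    return 1.0 / option_set_size < gamma <= 1.0


# Illustrative configuration matching the reported setup:
# MNIST digits 0 vs 6, 700 training / 7000 test images, Adam optimizer.
config = {
    "dataset": "MNIST (classes 0 and 6)",
    "n_train": 700,
    "n_test": 7000,
    "optimizer": "Adam",
    "gamma": 0.5,  # example value; must satisfy 1/|U| < gamma <= 1
}

# With |U| = 4 options, the valid range is (0.25, 1], so 0.5 passes.
assert valid_gamma(config["gamma"], option_set_size=4)
```

The range check is the only part of the setup the paper states precisely enough to encode; the remaining hyper-parameters were tuned on validation data and are not reported.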