Evidential Uncertainty and Diversity Guided Active Learning for Scene Graph Generation

Authors: Shuzhou Sun, Shuaifeng Zhi, Janne Heikkilä, Li Liu

ICLR 2023

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Experiments show that our AL framework can approach the performance of a fully supervised SGG model with only about 10% annotation cost. Furthermore, our ablation studies indicate that introducing AL into the SGG will face many challenges not observed in other vision tasks that are successfully overcome by our new modules. (...) Extensive experimental results on the SGG benchmarks demonstrate that EDAL can significantly save human annotation costs, approaching the performance of a fully supervised model with only about 10% labeling cost.
Researcher Affiliation | Academia | Shuzhou Sun¹, Shuaifeng Zhi², Janne Heikkilä¹, Li Liu²,¹; ¹University of Oulu, ²National University of Defense Technology
Pseudocode | Yes | A pseudo-code of EDAL is given in Appendix A.1. (...) Algorithm 1: Evidential Uncertainty and Diversity Guided Deep Active Learning (EDAL)
Open Source Code | No | The paper mentions using a 'Transformer-based backbone (Shi & Tang, 2020)' and cites a GitHub repository for it. However, that repository provides a third-party backbone used in their experiments, not the source code for the EDAL methodology itself or for the specific implementation of their novel modules (EDL, RPG, CBM, IBM).
Open Datasets | Yes | We show the effectiveness of EDAL on the popular VG150 dataset (Xu et al., 2017) for the SGG task through extensive experiments.
Dataset Splits | Yes | To prepare the AL dataset, we first remove all the relationship annotations from the official training set (about 56k images) to create the initial unlabeled pool U with object annotations only. We randomly sample from U at the rate of p₁ in the first round and start the AL framework. For each following i-th round (i ≥ 2), sampling at the rate of pᵢ with the proposed method is used to obtain Sᵢ,ℓᵢ until the label budget is exhausted. Specifically, in our experiments, p₁ = 0.01 and pᵢ = 0.01 for i = 2, 3, …. We set the label budget to be 10%, i.e., to label 10% of the data in U. (A sketch of this sampling schedule follows the table.)
Hardware Specification | No | The paper does not specify any hardware details such as GPU models, CPU types, or memory used for running the experiments. It only discusses the SGG backbones and experimental setup parameters.
Software Dependencies | No | The paper does not provide specific version numbers for any software dependencies, libraries, or frameworks (e.g., Python, PyTorch, TensorFlow, CUDA).
Experiment Setup | Yes | For τ in CBM and ℓ̃ₜ = ℓₜ + λ|Uₜ|t^η in IBM, we set τ = 0.2, λ = 10⁻⁵ and η = 4. (...) Specifically, in our experiments, p₁ = 0.01 and pᵢ = 0.01 for i = 2, 3, …. We set the label budget to be 10%... (A worked example of the IBM budget rule follows the table.)
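
For readers who want to trace the labeling schedule quoted in the Dataset Splits row, the loop below is a minimal sketch, not the authors' implementation. It assumes a pool of roughly 56k images and the reported rates (p₁ = pᵢ = 0.01, 10% budget); the selection callables are hypothetical stand-ins for the paper's random first-round sampling and its EDAL acquisition strategy.

```python
import random

POOL_SIZE = 56_000      # approximate size of the unlabeled pool U
RATE_PER_ROUND = 0.01   # p1 = pi = 0.01 for every round i
LABEL_BUDGET = 0.10     # stop once 10% of U has been labeled

def run_schedule(select_random, select_by_edal):
    """Round-based labeling loop sketched from the paper's description."""
    budget = int(LABEL_BUDGET * POOL_SIZE)
    labeled, round_idx = 0, 1
    while labeled < budget:
        n_query = min(int(RATE_PER_ROUND * POOL_SIZE), budget - labeled)
        # Round 1 samples randomly; rounds i >= 2 use the proposed method.
        batch = select_random(n_query) if round_idx == 1 else select_by_edal(n_query)
        labeled += len(batch)
        round_idx += 1
    return labeled, round_idx - 1

# Toy usage with placeholder selectors that just return random index lists.
pool = list(range(POOL_SIZE))
pick = lambda n: random.sample(pool, n)
print(run_schedule(pick, pick))  # -> (5600, 10): the 10% budget is hit in 10 rounds
```

At these rates the budget is exhausted after exactly 10 rounds of 1% each, which matches the paper's claim of reaching fully supervised performance at about 10% annotation cost.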
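
Similarly, the IBM budget rule quoted in the Experiment Setup row, ℓ̃ₜ = ℓₜ + λ|Uₜ|t^η with λ = 10⁻⁵ and η = 4, can be evaluated directly. The snippet below is a worked example under assumed values; the paper does not report |Uₜ| per round, so the pool size and base budget used here are purely illustrative.

```python
def adjusted_budget(base_budget: float, pool_size: int, t: int,
                    lam: float = 1e-5, eta: int = 4) -> float:
    """IBM rule as quoted: adjusted budget = l_t + lambda * |U_t| * t**eta."""
    return base_budget + lam * pool_size * t ** eta

# Illustrative numbers only: a base budget of 560 images (1% of ~56k) and an
# assumed 50,000 images still unlabeled at round t = 3.
print(adjusted_budget(560, 50_000, 3))  # 560 + 1e-5 * 50000 * 3**4 = 600.5
```

With η = 4 the correction term grows quickly with the round index t, so later rounds receive a progressively larger budget relative to the fixed per-round sampling rate.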