Uncertainty-Aware Hierarchical Refinement for Incremental Implicitly-Refined Classification
Authors: Jian Yang, Kai Zhu, Kecheng Zheng, Yang Cao
NeurIPS 2022
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Extensive experiments on widely used benchmarks (i.e., IIRC-CIFAR, IIRC-ImageNet-lite, IIRC-ImageNet-Subset, and IIRC-ImageNet-full) demonstrate the superiority of our proposed method over the state-of-the-art approaches. |
| Researcher Affiliation | Collaboration | Jian Yang¹, Kai Zhu¹, Kecheng Zheng², Yang Cao¹,³ (¹ University of Science and Technology of China; ² Ant Group; ³ Institute of Artificial Intelligence, Hefei Comprehensive National Science Center) |
| Pseudocode | Yes | Algorithm 1 Acquisition of Hierarchical Semantic Relationship |
| Open Source Code | Yes | Did you include the code, data, and instructions needed to reproduce the main experimental results (either in the supplemental material or as a URL)? [Yes] See Supplementary Materials. |
| Open Datasets | Yes | According to the semantic relevance among labels, the CIFAR100 [13] and ImageNet [6] datasets are rearranged to form the two-level hierarchy datasets [1]. |
| Dataset Splits | Yes | The validation set also follows the incomplete information setting. |
| Hardware Specification | No | The paper states, 'Did you include the total amount of compute and the type of resources used (e.g., type of GPUs, internal cluster, or cloud provider)? [Yes] See Supplementary Materials.' However, the main paper itself does not explicitly describe the hardware used for experiments. |
| Software Dependencies | No | The paper mentions methods and components like 'RBF kernel' and 'BCEWithLogitsLoss', but it does not specify software dependencies (e.g., libraries, frameworks) with version numbers. |
| Experiment Setup | Yes | Specifically, we use the RBF distance [19] to calculate the representation extension loss. ... γ denotes the hyper-parameter for balancing the losses; in our experiments, γ is set to 10.0. A detailed description of the hyper-parameter selection is given in supplementary material B.2. ... IIRC-CIFAR: ten superclasses are set up, each with about 4 to 8 subclasses. In the incremental stage, each new phase introduces five classes. IIRC-CIFAR involves 22 phases with ten preset class orders, called phase configurations, used for multiple test runs. (A minimal sketch of this loss combination follows the table.) |
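
Based on the setup quoted above, the following is a minimal PyTorch sketch of how the reported loss combination might be assembled: `BCEWithLogitsLoss` over the multi-hot IIRC targets plus an RBF-distance representation extension term weighted by γ = 10.0. The paper defines the actual representation extension loss; the `rbf_distance` helper, the `sigma` bandwidth, and the feature pairing used here are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn.functional as F


def rbf_distance(x: torch.Tensor, y: torch.Tensor, sigma: float = 1.0) -> torch.Tensor:
    """RBF-kernel distance between paired feature vectors:
    1 - exp(-||x - y||^2 / (2 * sigma^2)), averaged over the batch.
    `sigma` is an assumed bandwidth, not a value from the paper."""
    sq_dist = (x - y).pow(2).sum(dim=-1)
    return (1.0 - torch.exp(-sq_dist / (2.0 * sigma ** 2))).mean()


def total_loss(logits: torch.Tensor,
               multi_hot_targets: torch.Tensor,
               feats_new: torch.Tensor,
               feats_old: torch.Tensor,
               gamma: float = 10.0) -> torch.Tensor:
    # Multi-label classification term: IIRC targets are multi-hot
    # (a superclass and a subclass can both be active), hence the
    # paper's use of BCEWithLogitsLoss rather than cross-entropy.
    cls_loss = F.binary_cross_entropy_with_logits(logits, multi_hot_targets.float())
    # Representation extension term computed with the RBF distance,
    # weighted by gamma = 10.0 as reported in the experiment setup.
    ext_loss = rbf_distance(feats_new, feats_old)
    return cls_loss + gamma * ext_loss
```

Here `feats_old` is assumed to come from a frozen copy of the previous-phase backbone, a common pairing in incremental learning; the quoted text does not specify how the features are paired, so treat this as a sketch under that assumption.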