ChatGPT-Powered Hierarchical Comparisons for Image Classification

Authors: Zhiyuan Ren, Yiyang Su, Xiaoming Liu

NeurIPS 2023

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "Through extensive experiments and analyses, we demonstrate that our proposed approach is intuitive, effective, and explainable."
Researcher Affiliation | Academia | "Department of Computer Science and Engineering, Michigan State University, East Lansing, MI 48824. {renzhiy1, suyiyan1, liuxm}@msu.edu"
Pseudocode | Yes | "Figure 3: Pseudo-code for building the knowledge trees." (a hedged tree-building sketch follows the table)
Open Source Code | Yes | "Code is available here."
Open Datasets | Yes | "We conduct experiments on six different image classification benchmarks, i.e., ImageNet [10], CUB [43], Food101 [2], Places365 [26], Oxford Pets [50], and Describable Textures [7]."
Dataset Splits | Yes | "In line with the methodology employed by [29], we expand the ImageNet validation set by introducing two new categories, each containing five additional images."
Hardware Specification | Yes | "On a single Nvidia RTX A6000 GPU, it is feasible to replicate all the results of our paper within approximately two hours."
Software Dependencies | No | The paper mentions using "CLIP ViT-L/14" and "ChatGPT" but does not provide specific version numbers for these or any other software dependencies. (see the CLIP loading sketch after the table)
Experiment Setup | Yes | "There are four hyperparameters in our method, including the number of groups N in the k-means algorithm [27], the threshold l for leaf nodes, the weight λ assigned to score offset, and the tolerance τ for score reduction. ... We generally set l to 2 or 3 and τ to 0." (see the configuration sketch after the table)
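
The pseudocode row above refers to the paper's Figure 3, which builds knowledge trees by recursively grouping classes with k-means. The following is a minimal sketch of that idea, assuming classes are represented by CLIP text embeddings; the function names, the TreeNode structure, and the recursion guard are illustrative assumptions, not the authors' released code.

```python
# Hypothetical sketch of knowledge-tree construction in the spirit of the
# paper's Figure 3: recursively split classes into N k-means groups until a
# group holds at most l classes (the leaf-node threshold). Structure and
# names are assumptions, not the authors' implementation.
from dataclasses import dataclass, field
from typing import List
import numpy as np
from sklearn.cluster import KMeans

@dataclass
class TreeNode:
    class_names: List[str]
    children: List["TreeNode"] = field(default_factory=list)

def build_knowledge_tree(class_names, embeddings, n_groups=3, leaf_threshold=2):
    """Recursively cluster class embeddings (e.g., CLIP text features) into a
    tree; n_groups plays the role of N in the paper, leaf_threshold of l."""
    node = TreeNode(class_names=list(class_names))
    if len(class_names) <= leaf_threshold:
        return node  # few enough classes: this node is a leaf
    k = min(n_groups, len(class_names))
    labels = KMeans(n_clusters=k, n_init=10).fit_predict(embeddings)
    for g in range(k):
        idx = np.flatnonzero(labels == g)
        if len(idx) == 0 or len(idx) == len(class_names):
            continue  # skip empty clusters, or a failed split (node stays a leaf)
        node.children.append(
            build_knowledge_tree([class_names[i] for i in idx],
                                 embeddings[idx], n_groups, leaf_threshold))
    return node
```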
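
The experiment-setup row names four hyperparameters (N, l, λ, τ). Below is a small configuration sketch gathering them in one place: l = 2 and τ = 0 follow the values the paper states, while the values for N and λ are placeholders, since the quoted excerpt elides them.

```python
# The four hyperparameters quoted in the experiment-setup row, collected into
# one configuration object. leaf_threshold=2 and tau=0.0 follow the paper's
# stated choices; n_groups and lambda_offset are placeholder assumptions the
# excerpt above leaves unspecified.
from dataclasses import dataclass

@dataclass
class HierarchicalCompareConfig:
    n_groups: int = 3           # N: number of k-means groups (placeholder)
    leaf_threshold: int = 2     # l: max classes per leaf node ("2 or 3" per paper)
    lambda_offset: float = 0.5  # lambda: weight on the score offset (placeholder)
    tau: float = 0.0            # tau: tolerance for score reduction (paper sets 0)

config = HierarchicalCompareConfig()
```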
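
Since the software-dependencies row flags missing version pins, here is a minimal sketch of loading the CLIP ViT-L/14 backbone the paper names, using OpenAI's clip package. The class descriptions here are placeholders standing in for the ChatGPT-generated ones the method uses, and the install line is an assumed environment, not one the paper specifies.

```python
# Minimal sketch: load the CLIP ViT-L/14 backbone named in the paper and score
# an image against candidate text descriptions. Assumed (hypothetical) setup:
#   pip install torch git+https://github.com/openai/CLIP.git
import torch
import clip
from PIL import Image

device = "cuda" if torch.cuda.is_available() else "cpu"
model, preprocess = clip.load("ViT-L/14", device=device)

image = preprocess(Image.open("example.jpg")).unsqueeze(0).to(device)
# In the paper these descriptions come from ChatGPT; placeholders here.
texts = clip.tokenize(["a photo of a sparrow", "a photo of a finch"]).to(device)

with torch.no_grad():
    logits_per_image, _ = model(image, texts)
    probs = logits_per_image.softmax(dim=-1)
print(probs)  # similarity of the image to each candidate description
```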