VLG-CBM: Training Concept Bottleneck Models with Vision-Language Guidance

Authors: Divyansh Srivastava, Ge Yan, Lily Weng

NeurIPS 2024 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Extensive evaluations across five standard benchmarks show that our method, VLG-CBM, outperforms existing methods by at least 4.27% and up to 51.09% on Accuracy at NEC=5 (denoted as ANEC-5), and by at least 0.45% and up to 29.78% on average accuracy (denoted as ANEC-avg), while preserving both faithfulness and interpretability of the learned concepts as demonstrated in extensive experiments. (A minimal sketch of the NEC metric appears after the table.)
Researcher Affiliation | Academia | Divyansh Srivastava, Ge Yan, Tsui-Wei Weng {ddivyansh, geyan, lweng}@ucsd.edu, UC San Diego
Pseudocode | No | The paper describes its pipeline and methods in prose and diagrams (e.g., Figure 2) but does not include formal pseudocode blocks or algorithms.
Open Source Code | Yes | Our code is available at https://github.com/Trustworthy-ML-Lab/VLG-CBM
Open Datasets | Yes | Following prior work [15], we conduct experiments on five image recognition datasets: CIFAR10, CIFAR100 [7], CUB [23], Places365 [30] and ImageNet [18].
Dataset Splits | Yes | We tune the hyperparameters for our method using 10% of the training data as validation for the CIFAR10, CIFAR100, CUB and ImageNet datasets. For Places365, we use 5% of the training data as validation. (A sketch of such a split appears after the table.)
Hardware Specification | Yes | Our experiments run on a server with 10 CPU cores, 64 GB RAM, and 1 Nvidia 2080Ti GPU.
Software Dependencies | No | The paper mentions optimizers and models such as Adam [5], GLM-SAGA [24], CLIP-RN50, and ResNet-18/50. However, it does not provide specific version numbers for underlying software frameworks (e.g., PyTorch, TensorFlow, CUDA) or other key libraries.
Experiment Setup | Yes | We tune the CBL with Adam [5] optimizer with learning rate 1e-4 and weight decay 1e-5. ... We set T = 0.15 in Eq. (2) in all our experiments. (The optimizer configuration is sketched after the table.)
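
The headline metric quoted in the Research Type row, Accuracy at Number of Effective Concepts (ANEC), reports test accuracy when the final sparse layer uses a fixed average number of non-zero concept weights per class (NEC=5 for ANEC-5). The paper provides no pseudocode for this, so the following is a minimal sketch under that reading; the helper names nec and prune_to_nec, and the per-class top-k magnitude pruning rule, are assumptions rather than the authors' implementation.

    import torch

    def nec(weight: torch.Tensor) -> float:
        """Number of Effective Concepts: the average count of non-zero
        concept weights per class in the final layer.
        weight has shape (num_classes, num_concepts)."""
        return (weight != 0).sum(dim=1).float().mean().item()

    def prune_to_nec(weight: torch.Tensor, target_nec: int = 5) -> torch.Tensor:
        """Zero all but the target_nec largest-magnitude weights in each
        row, so the pruned layer has NEC == target_nec. This top-k rule
        is an assumption; the paper controls NEC via sparse training."""
        topk = weight.abs().topk(target_nec, dim=1).indices
        pruned = torch.zeros_like(weight)
        pruned.scatter_(1, topk, weight.gather(1, topk))
        return pruned

Under this reading, ANEC-5 would be the test accuracy after replacing the final-layer weight with prune_to_nec(weight, 5), and ANEC-avg would average the accuracy over a range of NEC values.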
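
The Dataset Splits row quotes a 10% validation hold-out (5% for Places365) but not the splitting code. Below is a minimal sketch, assuming a torchvision dataset and a fixed random seed (both assumptions; the paper does not specify either):

    import torch
    from torch.utils.data import random_split
    from torchvision import datasets, transforms

    # Hold out 10% of the CIFAR-10 training set for validation, mirroring
    # the quoted protocol (5% for Places365). The fixed seed is an
    # assumption made for reproducibility; the paper does not state one.
    train_full = datasets.CIFAR10(root="data", train=True, download=True,
                                  transform=transforms.ToTensor())
    val_size = int(0.1 * len(train_full))
    train_set, val_set = random_split(
        train_full,
        [len(train_full) - val_size, val_size],
        generator=torch.Generator().manual_seed(0),
    )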
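
The Experiment Setup row fixes the Concept Bottleneck Layer (CBL) optimizer to Adam with learning rate 1e-4 and weight decay 1e-5. A sketch of that configuration follows; the CBL shown is a placeholder linear layer with illustrative dimensions, since the actual module is defined in the released code:

    import torch

    # Placeholder CBL: a linear map from backbone features to concept
    # logits; the 2048 -> 512 dimensions are illustrative assumptions.
    cbl = torch.nn.Linear(2048, 512)
    optimizer = torch.optim.Adam(cbl.parameters(), lr=1e-4, weight_decay=1e-5)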