TPR: Topology-Preserving Reservoirs for Generalized Zero-Shot Learning

Authors: Hui Chen, Yanbin Liu, Yongqiang Ma, Nanning Zheng, Xin Yu

NeurIPS 2024 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Extensive experiments on twelve object recognition datasets demonstrate that our model, termed Topology-Preserving Reservoir (TPR), outperforms strong baselines including both prompt learning and conventional generative-based zero-shot methods.
Researcher Affiliation | Academia | 1 National Key Laboratory of Human-Machine Hybrid Augmented Intelligence, National Engineering Research Center of Visual Information and Applications, and Institute of Artificial Intelligence and Robotics, Xi'an Jiaotong University; 2 Auckland University of Technology; 3 The University of Queensland
Pseudocode | No | The paper does not contain any structured pseudocode or algorithm blocks.
Open Source Code | Yes | We provide open access to the code via the Anonymous GitHub link (https://anonymous.4open.science/r/TPR-3D0A/).
Open Datasets | Yes | We validate the effectiveness of TPR on four widely-used datasets in GZSL: AwA2 [51], CUB [42], FLO [52], and SUN [53]. We follow the commonly-used dataset split [26] but use the generated textual descriptions instead of attribute annotations.
Dataset Splits | Yes | Following common practice in GZSL, the seen dataset S is split into a training set D^s_tr and a test set D^s_te, while the unseen dataset U constitutes the test set D^u_te. The model is then trained on D^s_tr and evaluated on the union set D^s_te ∪ D^u_te. ... Optimal hyperparameters of TPR are chosen from the following ranges: ... which are tuned on the validation set via grid search. (See the protocol sketch after this table.)
Hardware Specification | Yes | TPR is trained for 200 epochs with a batch size of 512 via the Adam optimizer on a single NVIDIA RTX 4090 GPU.
Software Dependencies | No | The paper mentions using specific models and tools such as CLIP, BERT, and ChatGPT, but does not provide version numbers for software dependencies such as deep learning frameworks (e.g., PyTorch, TensorFlow) or other libraries.
Experiment Setup | Yes | TPR is trained for 200 epochs with a batch size of 512 via the Adam optimizer on a single NVIDIA RTX 4090 GPU. Overall, the optimal hyperparameters of TPR are chosen from the following ranges: learning rate ∈ {1e-5, 3e-5, 5e-5, 7e-5, 1e-4}, η ∈ {0.2, 0.5, 1.0, 2.0}, β ∈ {1e-4, 5e-4, 1e-3}, τ ∈ {0.03, 0.05, 0.07, 0.10}, and N2 ∈ {100, 200, 300, 400}, which are tuned on the validation set via grid search. (See the grid-search sketch after this table.)
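
The "Dataset Splits" row describes the standard GZSL protocol: train on the seen-class training split D^s_tr and evaluate jointly on D^s_te ∪ D^u_te. The minimal sketch below illustrates that protocol under two assumptions not stated in the quoted text: per-class accuracy averaging and the harmonic-mean summary metric common in GZSL. The `model.predict` method and the dataset objects are hypothetical placeholders, not the authors' API.

```python
# Hedged sketch of GZSL evaluation on the union of seen and unseen test sets.
import numpy as np

def per_class_accuracy(preds, labels):
    """Accuracy averaged over classes (the usual GZSL convention, assumed here)."""
    accs = [(preds[labels == c] == c).mean() for c in np.unique(labels)]
    return float(np.mean(accs))

def evaluate_gzsl(model, seen_test, unseen_test):
    # Training uses D^s_tr only; at test time the candidate label space
    # contains both seen and unseen classes (D^s_te ∪ D^u_te).
    acc_s = per_class_accuracy(model.predict(seen_test.features), seen_test.labels)
    acc_u = per_class_accuracy(model.predict(unseen_test.features), unseen_test.labels)
    h = 2 * acc_s * acc_u / (acc_s + acc_u + 1e-12)  # harmonic mean (assumed metric)
    return acc_s, acc_u, h
```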
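The "Experiment Setup" row lists the hyperparameter ranges tuned by grid search on the validation set. The sketch below enumerates exactly those ranges; `train_and_validate` is a hypothetical helper standing in for one full training run (200 epochs, batch size 512, Adam) followed by validation scoring, not a function from the released code.

```python
# Hedged sketch of the grid search over the ranges quoted from the paper.
from itertools import product

grid = {
    "lr":   [1e-5, 3e-5, 5e-5, 7e-5, 1e-4],   # learning rate
    "eta":  [0.2, 0.5, 1.0, 2.0],             # η
    "beta": [1e-4, 5e-4, 1e-3],               # β
    "tau":  [0.03, 0.05, 0.07, 0.10],         # τ
    "N2":   [100, 200, 300, 400],
}

best_cfg, best_score = None, float("-inf")
for values in product(*grid.values()):
    cfg = dict(zip(grid.keys(), values))
    # Hypothetical helper: trains TPR with this config and returns a validation score.
    score = train_and_validate(cfg, epochs=200, batch_size=512, optimizer="adam")
    if score > best_score:
        best_cfg, best_score = cfg, score

print(best_cfg, best_score)
```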