Notice: The reproducibility variables underlying each score are classified using an automated LLM-based pipeline, validated against a manually labeled dataset. LLM-based classification introduces uncertainty and potential bias; scores should be interpreted as estimates. Full accuracy metrics and methodology are described in [1].
Learning to Learn Variational Semantic Memory
Authors: Xiantong Zhen, Yingjun Du, Huan Xiong, Qiang Qiu, Cees Snoek, Ling Shao
NeurIPS 2020
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Experiments demonstrate that the probabilistic modelling of prototypes achieves a more informative representation of object classes compared to deterministic vectors. The consistent new state-of-the-art performance on four benchmarks shows the benefit of variational semantic memory in boosting few-shot recognition. |
| Researcher Affiliation | Collaboration | ¹AIM Lab, University of Amsterdam, Netherlands; ²Inception Institute of Artificial Intelligence, Abu Dhabi, UAE; ³Harbin Institute of Technology, Harbin, China; ⁴Mohamed bin Zayed University of Artificial Intelligence, Abu Dhabi, UAE; ⁵Electrical and Computer Engineering, Purdue University, USA |
| Pseudocode | No | The paper does not contain structured pseudocode or algorithm blocks. |
| Open Source Code | No | The paper does not provide concrete access information (e.g., a specific link or explicit statement of release in supplementary materials) for its source code. |
| Open Datasets | Yes | We evaluate our model on four standard few-shot classification tasks: miniImageNet [71], tieredImageNet [53], CIFAR-FS [4] and Omniglot [33]. |
| Dataset Splits | No | No explicit details on specific dataset splits (e.g., percentages, sample counts for train/validation/test) are provided in the main text. |
| Hardware Specification | No | No specific hardware details (e.g., GPU/CPU models, memory) used for running experiments are provided in the paper. |
| Software Dependencies | No | No specific software dependencies with version numbers are provided in the paper. |
| Experiment Setup | No | "More implementation details, including optimization settings and network architectures, are given in the supplementary material." The main paper itself does not provide specific hyperparameter values or training configurations. |
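The notice above states that the LLM-based classifications were validated against a manually labeled dataset. As a minimal sketch of what such a validation can look like, the snippet below computes raw accuracy and Cohen's kappa (chance-corrected agreement) between manual labels and LLM predictions for one reproducibility variable. The function name, variable names, and toy labels are illustrative assumptions, not taken from the pipeline described in [1].

```python
# Hypothetical validation sketch: agreement between manual labels and
# LLM-predicted labels for a single reproducibility variable.
from collections import Counter

def validation_metrics(manual, predicted):
    """Return (accuracy, Cohen's kappa) for two parallel label lists."""
    assert len(manual) == len(predicted) and manual
    n = len(manual)
    accuracy = sum(m == p for m, p in zip(manual, predicted)) / n
    # Expected chance agreement from the two marginal label distributions.
    m_counts, p_counts = Counter(manual), Counter(predicted)
    expected = sum(m_counts[c] * p_counts[c] for c in m_counts) / (n * n)
    kappa = (accuracy - expected) / (1 - expected) if expected < 1 else 1.0
    return accuracy, kappa

# Toy example: six papers labeled for the "Open Source Code" variable.
manual = ["Yes", "No", "No", "Yes", "No", "No"]
llm    = ["Yes", "No", "Yes", "Yes", "No", "No"]
acc, kappa = validation_metrics(manual, llm)
```

Reporting kappa alongside accuracy guards against skewed label distributions (e.g. if most papers are "No", a trivial classifier already scores high raw accuracy).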